Superintelligence: Paths, Dangers, Strategies

by Nick Bostrom

Paperback

$13.56 (list price $15.95, save 15%)

Product Details

ISBN-13: 9780198739838
Publisher: Oxford University Press
Publication date: 05/01/2016
Pages: 390
Sales rank: 45,926
Product dimensions: 5.10(w) x 7.60(h) x 1.00(d) inches

About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

Table of Contents

Preface
1. Past Developments and Present Capabilities
2. Paths to Superintelligence
3. Forms of Superintelligence
4. The Kinetics of an Intelligence Explosion
5. Decisive Strategic Advantage
6. Cognitive Superpowers
7. The Superintelligent Will
8. Is the Default Outcome Doom?
9. The Control Problem
10. Oracles, Genies, Sovereigns, Tools
11. Multipolar Scenarios
12. Acquiring Values
13. Choosing the Criteria for Choosing
14. The Strategic Picture
15. Crunch Time

Customer Reviews

Superintelligence: Paths, Dangers, Strategies. Rated 4 out of 5 stars. 3 reviews.
Joshua_Cogliati More than 1 year ago
I found the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (ISBN: 978-0-19-967811-2) fascinating; it examined many factors I had not even thought about. It discusses the different paths to superintelligence, such as creating new artificial intelligence, emulating whole human brains in computers, creating new biological entities, brain-computer interfaces, and networks of humans. It discusses the possible consequences of this, and what humans can and cannot do about it. The book also discusses the speed at which the transition from no superintelligence to superintelligence might happen; it will probably be measured in days to months, not years.

Some of the methods of failure in the chapter Malignant Failure Modes seem unrealistic to me. Basically, they require a particular combination: the AI is smart enough to achieve goals in the real world, but lacks the common sense to know when the goal has been achieved in any real sense, or that a better goal needs to be substituted. The book gives the example of a robot that has been told to build paperclips and converts the entire universe into a paperclip factory. This kind of failure mode seems somewhat unrealistic to me, though it does seem possible (humans have certainly pursued insane goals in a rational way, such as the members of the Heaven's Gate cult).

My second main disagreement is with the book's attitude that we need to control the superintelligence. I think this is in some sense unrealistic (the book does acknowledge that it is a challenge). In reality, we are creating a god or gods, and any control we have is an illusion, so calling it the "control" problem is misleading. In my opinion it is actually an ethics problem: how do we make an ethical god? (Or, if you prefer: we are as ants; how do we communicate to the humans an ethical belief that the humans will consider?)

My last main question is: does thinking saturate? How much thinking does it take before you run out of new things to think about? The book envisions that an artificial intelligence would consider converting the entire universe into a thinking machine. I know that humans have never run out of new things to think about, but just how far does this extend? For example, take an asteroid, move it to roughly Venus's orbit, convert it into a massive solar panel, computer, and other support structures, and you have a staggering amount of computing power. Does that computer run out of new things to think of?

Overall, I found the book fascinating and was very glad to have read it, and I recommend it to anyone who is thinking about what happens when artificial intelligences that are vastly smarter than humans are around.
Anonymous More than 1 year ago
The ebook is missing the back cover of the print book. Overall amazing book. Required reading for all.
Anonymous More than 1 year ago
Found it very hard to get through