A New York Times bestseller
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
Publisher: Oxford University Press
Product dimensions: 5.10(w) x 7.60(h) x 1.00(d)
About the Author
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
Table of Contents
1. Past Developments and Present Capabilities
2. Roads to Superintelligence
3. Forms of Superintelligence
4. Singularity Dynamics
5. Decisive Strategic Advantage
6. Intellectual Superpowers
7. The Superintelligent Will
8. Is the Default Outcome Doom?
9. The Control Problem
10. Oracles, Genies, Sovereigns, Tools
11. Multipolar Scenarios
12. Acquiring Values
13. Design Choices
14. The Strategic Picture
15. Crunch Time
Most Helpful Customer Reviews
I found Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (ISBN: 978-0-19-967811-2) fascinating, and it examined many factors I had not even thought about. It discusses the different paths to superintelligence, such as creating new artificial intelligence, emulating whole human brains in computers, creating new biological entities, brain-computer interfaces, and networks of humans. It discusses the possible consequences, and what humans can or cannot do about them. The book also discusses the speed at which the transition from no superintelligence to superintelligence might happen; it suggests the transition will probably be measured in days to months, not years.

Some of the failure modes in the chapter on malignant failure modes seem unrealistic to me. Basically, they require an odd combination: an AI smart enough to achieve goals in the real world, but lacking the common sense to know when a goal has been achieved in any real sense, or that a better goal should be substituted. The book gives the example of a robot that has been told to make paperclips and converts the entire universe into a paperclip factory. This kind of failure mode seems at least somewhat unrealistic to me, though it does seem possible; humans have certainly worked rationally toward insane goals, as the members of the Heaven's Gate cult did.

My second main disagreement is with the book's attitude that we need to control the superintelligence. I think this is in some sense unrealistic (the book does acknowledge that it is a challenge). In reality, we are creating a god or gods, and any control we have is an illusion, so calling it the "control" problem is misleading. In my opinion it is actually an ethics problem: how do we make an ethical god? (Or, if you prefer: we are as ants; how do we communicate to the humans an ethical belief that the humans will consider?)

My last main question is: does thinking saturate? How much thinking does it take before you run out of new things to think about? The book envisions that an artificial intelligence would consider converting the entire universe into a thinking machine. Humans have never run out of new things to think about, but just how far does that extend? For example, take an asteroid, move it to roughly Venus's orbit, and convert it into a massive solar panel, computer, and support structures, and you have a staggering amount of computing power. Would that computer run out of new things to think about?

Overall, I found the book fascinating, was very glad to have read it, and I recommend it to anyone thinking about what happens when artificial intelligences vastly smarter than humans are around.
The ebook is missing the back cover of the print book. Overall, an amazing book. Required reading for all.
Found it very hard to get through