Brian Clegg's Reviews > Superintelligence: Paths, Dangers, Strategies
Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying ‘philosophy is dead – all we need now is physics’ or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.
It’s worth pointing out immediately that this isn’t really a popular science book. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail – but if you are interested in philosophy and/or artificial intelligence, don’t let that put you off.
What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. (Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess.) In the first couple of chapters he examines how this might be possible – and points out that the timescale is very vague. (Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and that’s still the case.) Even so, it seems entirely feasible that we will have a more-than-human AI – a superintelligent AI – by the end of the century. But the ‘how’ aspect is only a minor part of this book.
The real subject here is how we would deal with such a ‘cleverer than us’ AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it from taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen – and if we don’t think through the implications and how we would deal with it, we could well be stuffed as a species.
I think it’s a shame that Bostrom doesn’t make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they’d go wrong), but that’s about it. Yet science fiction has put far more thought into these issues than Bostrom allows for – and, dare I say it, with a lot more readability than you typically get in a textbook. It would have been worthy of a chapter in its own right.
I also think a couple of the fundamentals aren’t covered well enough; they are pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure enough thought goes into the basics of how you could pull the plug manually – if necessary by shutting down the power station that supplies the AI with electricity.
The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans can, so once we build one it will rapidly improve on itself, producing an ‘intelligence explosion’. My suspicion is that if you got hold of the million most intelligent people on Earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn’t mean it can do this specific task well – this is an assumption.
However this doesn’t set aside what a magnificent conception the book is. I don’t think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs… and by physicists who think there is no point to philosophy.
Comments (showing 1-10 of 10)
As a physicist myself, I think that when Stephen Hawking and others say that philosophy is dead, they essentially refer to metaphysics. Indeed, the history of human ideas is full of philosophical answers to both small and grand questions, which we today regard as totally unfounded, if not bizarre.
But this is not to say that philosophy is pointless! Just that the work of a modern philosopher, able to interact productively with the rest of the intellectual community, should be more about asking the right questions (the questions that scientists alone don't see) than building the right answers.
In this respect, Nick Bostrom is doing a wonderful job.
I largely agree with your review here, especially the point that he could have made better use of SF – I felt that myself. He forced me to reconsider much of my position on AI, though I'm dubious that a "fast takeoff" is likely, in the sense Bostrom discusses. Take a look at my rather long review if you're interested in a discussion of how hard it is to be smart.
Hawking: "Why are we here? Where do we come from? Traditionally, these are questions for philosophy, but philosophy is dead,” he said. “Philosophers have not kept up with modern developments in science. Particularly physics.” I think extracting 'philosophy is dead' from that alone is a bit misleading. But I agree with you, Bostrom is an exception... which makes sense, as he has a degree in physics!
Oh, and perhaps he avoided referencing science fiction too much on purpose - to distance his book from it?
Ben wrote: "Bostrom doesn't read sci-fi, it doesn't do much for him."
Yea I agree, I think he'd probably just read the synopses on Wikipedia and such.
FYI I say that because I've heard him say it before. I can't recall where, but it might've been on a podcast he was invited on along with guest Luke Muehlhauser, though I forget the name of the podcast (it wasn't Muehlhauser's own one, Pale Blue Dot).
"I think the trouble with this argument is that my suspicion is that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn’t mean it can do this specific task well – this is an assumption."
The assumption is probably based on the common observation that smart people can learn to do most cognitively-demanding tasks better than less-smart people. We have IQ as a measure of g ("general intelligence") because psychometricians discovered a long time ago that an individual's performance on one type of intelligence test correlates with that individual's performance on other seemingly unrelated intelligence tests. That's why we don't have multiple IQ tests to predict an individual's performance in every specific discipline.
The US armed forces save millions of dollars every year in training costs by administering one battery of tests to recruits, which then predicts their ability to learn hundreds of different disciplines (and the list continually changes).
The definition of "general" AI as opposed to narrow AI is that the former would mimic the human's general intelligence, i.e. the human's general ability to learn and master a wide variety of domains. Narrow AI, in contrast, merely simulates a particular slice of human cognitive skill, such as facial recognition or vehicle navigation.
Most if not all of the million smartest people could probably learn to design computers, if they had the time and inclination. What would today's smart computer designers be doing if they had been born 500 years ago? Probably some other mentally demanding tasks, if their life circumstances gave them a choice. It would be odd if humans had somehow evolved specialized mental skills that could only be useful after the year 1950. Given that evolution has no look-ahead capability, the mental skills involved in designing computers have to come from the same cognitive toolkit that helped our ancestors bring down mammoths during the Pleistocene.
The assumption that designing incrementally better computers requires a mental ability that is distinctly different from all other mental abilities seems shaky to me.
In any case, computers don't have to be entirely self-improving to escape human control. Already humans rely on computers to help them design the next computers. If for some reason a human has to stay in the design loop forever, sufficiently smart computers could arrange for that to happen. Plenty of humans are willing to be in the design loop right now.
Lots of biological invasive species are out of control, and they have no self-improvement capacity to speak of (other than the very slow pace of biological evolution). Things that are much dumber than humans get out of human control all the time. Even the dumber humans can prey on the smarter humans (see: crime).
I would distinguish between the propositions 'philosophy is pointless' and 'the majority of the work being done in the philosophy departments of modern academia is useless'. I would certainly disagree with the former, but not the latter.
This is a good, relevant talk: http://www.youtube.com/watch?v=YLvWz9...