Brian’s review of Superintelligence: Paths, Dangers, Strategies > Likes and Comments
As a physicist myself, I think that when Stephen Hawking and others say that philosophy is dead, they essentially refer to metaphysics.
Indeed, the history of human ideas is full of philosophical answers to both small and grand questions, which we today regard as totally unfounded if not bizarre.
But this is not to say that philosophy is pointless! Just that the work of a modern philosopher, able to interact productively with the rest of the intellectual community, should be more about asking the right questions (the questions that scientists alone don't see) than building the right answers.
In this respect, Nick Bostrom is doing a wonderful job.
I largely agree with your review here, especially the point that he could have made better use of SF -- I felt that myself.
He forced me to reconsider much of my position on AI, though I'm dubious that a "fast takeoff" is likely, in the sense Bostrom discusses. Take a look at my rather long review if you're interested in a discussion of how hard it is to be smart.
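For readers unfamiliar with what a "fast takeoff" means here, a minimal numerical sketch may help. It loosely follows Bostrom's framing of takeoff speed as optimization power divided by recalcitrance, but every parameter and functional form below is an invented illustration, not anything taken from the book or this thread.

```python
# Toy illustration only: every number and functional form here is invented.
# Bostrom frames takeoff speed roughly as
#     rate of change in intelligence = optimization power / recalcitrance,
# and the shape of the curve depends on how recalcitrance behaves as the system improves.

def simulate_takeoff(recalcitrance, steps=50, dt=1.0):
    """Crude Euler integration of dI/dt = optimization_power(I) / recalcitrance(I)."""
    intelligence = 1.0
    for _ in range(steps):
        optimization_power = 1.0 + 0.5 * intelligence  # the system contributes more as it improves
        intelligence += dt * optimization_power / recalcitrance(intelligence)
    return intelligence

# "Slow takeoff": each capability gain is harder to get, so recalcitrance rises with capability.
slow = simulate_takeoff(lambda i: 5.0 + i)
# "Fast takeoff": recalcitrance stays flat, so the self-amplifying term dominates.
fast = simulate_takeoff(lambda i: 5.0)

print(f"toy 'intelligence' after 50 steps: slow takeoff ~ {slow:.1f}, fast takeoff ~ {fast:.1f}")
```

The only point of the sketch is the qualitative contrast: when recalcitrance grows with capability the curve flattens out, and when it does not, the same feedback loop produces runaway growth.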
Hawking: "Why are we here? Where do we come from? Traditionally, these are questions for philosophy, but philosophy is dead," he said. "Philosophers have not kept up with modern developments in science. Particularly physics."
I think extracting 'philosophy is dead' from that alone is a bit misleading. But, I agree with you, Bostrom avoids this; he is an exception... which makes sense, as he has a degree in physics!
Oh, and perhaps he avoided referencing science fiction too much on purpose - to distance his book from it?
Ben wrote: "Bostrom doesn't read sci-fi, it doesn't do much for him."
Yeah, I agree; I think he'd probably just read the synopses on Wikipedia and such.
FYI, I say that because I've heard him say it before. I can't recall where, but it might've been on a podcast he was invited on along with guest Luke Muehlhauser, although I forget the name of the podcast (it wasn't Muehlhauser's own one, Pale Blue Dot).
"I think the trouble with this argument is that my suspicion is that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn’t mean it can do this specific task well – this is an assumption."
The assumption is probably based on the common observation that smart people can learn to do most cognitively-demanding tasks better than less-smart people. We have IQ as a measure of g ("general intelligence") because psychometricians discovered a long time ago that an individual's performance on one type of intelligence test correlates with that individual's performance on other seemingly unrelated intelligence tests. That's why we don't have multiple IQ tests to predict an individual's performance in every specific discipline.
The US armed forces save millions of dollars every year in training costs by administering one battery of tests to recruits, which then predicts their ability to learn hundreds of different disciplines (and the list continually changes).
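To make the correlation point above concrete, here is a purely illustrative sketch with made-up numbers: test scores are simulated from one shared latent factor plus test-specific noise, and a single principal component then accounts for most of the variance, which is the statistical pattern behind treating IQ as a measure of g.

```python
# Purely illustrative: the loadings and noise level are invented, not real psychometric data.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

g = rng.normal(size=n_people)                    # latent "general" factor for each person
loadings = np.full(n_tests, 0.8)                 # assume every test leans heavily on g
noise = rng.normal(scale=0.6, size=(n_people, n_tests))
scores = g[:, None] * loadings + noise           # observed scores on six different tests

corr = np.corrcoef(scores, rowvar=False)         # correlations between the tests
off_diag = (corr.sum() - n_tests) / (n_tests * (n_tests - 1))
top_eigenvalue = np.linalg.eigvalsh(corr)[-1]    # largest eigenvalue = strength of the common factor

print(f"mean correlation between different tests: {off_diag:.2f}")
print(f"share of variance captured by one common factor: {top_eigenvalue / n_tests:.2f}")
```

With these made-up loadings, roughly two thirds of the variance across six "tests" is captured by one common factor, which is the kind of result that makes a single battery of tests predictive across many disciplines.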
The definition of "general" AI, as opposed to narrow AI, is that the former would mimic human general intelligence, i.e. the general human ability to learn and master a wide variety of domains. Narrow AI, in contrast, merely simulates a particular slice of human cognitive skill, such as facial recognition or vehicle navigation.
Most if not all of the million smartest people could probably learn to design computers, if they had the time and inclination. What would today's smart computer designers be doing if they had been born 500 years ago? Probably some other mentally demanding tasks, if their life circumstances gave them a choice. It would be odd if humans had somehow evolved specialized mental skills that could only be useful after the year 1950. Given that evolution has no look-ahead capability, the mental skills involved in designing computers have to come from the same cognitive toolkit that helped our ancestors bring down mammoths during the Pleistocene.
The assumption that designing incrementally better computers requires a mental ability that is distinctly different from all other mental abilities seems shaky to me.
In any case, computers don't have to be entirely self-improving to escape human control. Already humans rely on computers to help them design the next computers. If for some reason a human has to stay in the design loop forever, sufficiently smart computers could arrange for that to happen. Plenty of humans are willing to be in the design loop right now.
Lots of biological invasive species are out of control, and they have no self-improvement capacity to speak of (other than the very slow pace of biological evolution). Things that are much dumber than humans get out of human control all the time. Even the dumber humans can prey on the smarter humans (see: crime).
I think Bostrom is consciously avoiding a lot of SF as a metaphor for thinking about AI for a few reasons: the giggle factor, and the degree to which most SF anthropomorphizes AI. I'm an SF writer reading the book as research, and I'm finding it amazing; so many of the observations feel commonsensical on reading, but you realize you've never thought of them yourself.
I think it's pretty clear that the reason he didn't draw more examples from sci-fi in his book is that people have a tendency to dismiss the points, predictions, and especially the warnings contained therein as science fiction. I myself tried to engage in conversation with a friend on some of the topics contained in this book, but he dismissed its ideas, saying, "that's just Skynet." I personally thoroughly enjoy hard AI-themed sci-fi; however, I definitely understand the author's reluctance to reference those themes in this important work, as it might prevent people from taking it seriously. I was also surprised to see that you only gave the book 3 out of 5 stars.
Regarding "pulling the plug" on an AI in an emergency... I have read Max Tegmark's book on this topic (Life 3.0), and he explained it like this: once a super-AI breaks out of its designed confinement, it seeds copies of itself all over the world. So shutting down one power facility would have no consequences at all.
And regarding your doubt that it would not necessarily be able to build another super-AI by itself: a super-AI doesn't have to know about the components itself. It will be able to hire people to do the work for it, and those people would not even know what project they are working on, thanks to compartmentalization.
Brian, I just stumbled on this review and found it thoughtful. Curious if you have any recommendations on the same subject matter which you feel are better than this one? Thanks!
I would distinguish between the propositions 'philosophy is pointless' and 'the majority of the work being done in the philosophy departments of modern academia is useless'. I would certainly disagree with the former, but not the latter.
This is a good, relevant talk: http://www.youtube.com/watch?v=YLvWz9...