
Superintelligence: Paths, Dangers, Strategies

3.87  ·  Rating details ·  14,167 ratings  ·  1,375 reviews
Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position.
Hardcover, 352 pages
Published September 3rd 2014 by Oxford University Press (first published July 3rd 2014)


Reader Q&A

Popular Answered Questions
Ben Pace I am quite unsure what you are imagining when you imply that an AI can have no goals. If the seed AI only wanted to improve itself, that would be its goal. An AI with no goals does nothing... It's just a rock.

Maybe you had something else in mind. I do not know if you have reached the argument for the following statement, but it is argued that if a superintelligent AI has a goal, that AI's goal tends to entirely shape the future. If you make a superintelligent AI without goals, someone else can come along and make an AI with goals, and unless that person has done a helluva lot of work on deciding on the goals, it is also argued that things will be very, very bad.
John Park Almost certainly part of the publisher's publicity package. (I don't think Goodreads employs any editors.)

Community Reviews

Superintelligence was published in 2014, and it's already had time to become a cult classic. So, with apologies for being late getting to the party, here's my two cents.

For people who still haven't heard of it, the book is intended as a serious, hard-headed examination of the risks associated with the likely arrival, in the short- to medium-term future, of machines which are significantly smarter than we are. Bostrom is well qualified to do this. He runs the Future of Humanity Institute at Oxford.
Brian Clegg
Jul 01, 2014 rated it liked it
There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying ‘philosophy is dead – all we need now is physics’ or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.

It’s worth pointing out immediately that this isn’t really a popular science book. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students.
Jan 08, 2015 rated it it was ok  ·  review of another edition
Preamble: I confess to Roko's Basilisk that I didn't believe superintelligence was much of a threat, and this book didn't really do anything to challenge that prior. Mea Culpa, Mea Culpa, Mea [local] Maxima Culpa.

I. Overall View
I'm a software engineer with some basic experience in machine learning, and though the results of machine learning have been becoming more impressive and general, I've never really seen where people are coming from when they see strong superintelligence just around the corner.
Riku Sayuj
Imagine a Danger (You may say I'm a Dreamer)

Bostrom is here to imagine a world for us (and he has a batshit crazy imagination, have to give him that). The world he imagines is a post-AI world or at least a very-near-to-AI world or a nascent-AI world. Don’t expect to know how we will get there - only what to do if we get there and how to skew the road to getting there to our advantage. And there are plenty of wild ideas on how things will pan out in that world-in-transition, the ‘routes’ bit - Bost
Leonard Gaya
Feb 09, 2017 rated it really liked it  ·  review of another edition
In recent times, prominent figures such as Stephen Hawking, Bill Gates and Elon Musk have expressed serious concerns about the development of strong artificial intelligence technology, arguing that the dawn of super-intelligence might well bring about the end of mankind. Others, like Ray Kurzweil (who, admittedly, has gained some renown in professing silly predictions about the future of the human race), have an opposite view on the matter and maintain that AI is a blessing that will bestow utop…
As a software developer, I've cared very little for artificial intelligence (AI) in the past. My programs, which I develop professionally, have nothing to do with the subject. They’re dumb as can be and only follow strict orders (that is, rather simple algorithms). Privately I wrote a few AI test programs (with more or less success) and read articles in blogs or magazines (with more or less interest). By and large I considered AI as not being relevant to me.

In March 2016 AlphaGo was introduced
☘Misericordia☘ ⚡ϟ⚡⛈⚡☁ ❇️❤❣
Hypothetical enough to become insanely dumb and boring. Superintelligence, hyperintelligence, hypersuperintelligence…

Basically, it all amounts to the fact that maybe, sometime, the ultimate thinking machines will do or not do something. Just how new is that idea? IMO, the main point is: how do we get them there?

Designing intuition? Motivating the AI? Motivational scaffolding? Associative value accretion? While it's all very entertaining, it's nowhere near practical at this point. And the bareboned
John Igo
Apr 24, 2015 rated it liked it
Shelves: audio-book
This book...

if {}
else if {}
else if {}
else if {}
else if {}

You can get most of the ideas in this book in the WaitButWhy article about AI.

This book assumes that an intelligence explosion is possible, and that it is possible for us to make a computer whose intelligence will explode. Then it talks about ways to deal with it.
A lot of this book seems like pointless navel-gazing, but I think some of it is worth reading.

Manuel Antão
Jul 07, 2018 rated it it was amazing
Shelves: 2018
If you're into stuff like this, you can read the full review.

(Count-of-Self) = 0: "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom

"Box 8 - Anthropic capture: The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation."

In "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom

Would you say that the desire to preserve 'itself' comes from the possession of a (self) consciousness? If so, does the acqu
I'm very pleased to have read this book. It states, concisely, the BIG ISSUES of the general field of AI research. The paths to making AIs are only a part of the book, and not a particularly important one at this point.

More interestingly, it states that we need to be more focused on the dangers of superintelligence. Fair enough! If I were an ant separated from my colony coming into contact with an adult human being, or a sadistic (if curious) child, I might start running for the hills before that magni
Jasmin Shah
Sep 04, 2014 rated it it was amazing
Recommends it for: AI Enthusiasts
Recommended to Jasmin by: Elon Musk
Never let a Seed AI read this book!
Clif Hostetler
Dec 30, 2018 rated it liked it
Recommended to Clif by: Coltyn Seifert
Shelves: current-events
This book was published in 2014, so it is a bit dated, and I’m now writing this review somewhat late for what should be a cutting-edge issue. But many people who are interested in this subject continue to respect this book as the definitive examination of the risks associated with machines that are significantly smarter than humans.

We have been living for many years with computers—and even phones—that store more information and can retrieve that information faster than any human. These devices don’
Robert Schertzer
I switched to the audio version of this book after struggling with the Kindle edition since I needed to read this for a book club. If you are looking for a book on artificial intelligence (AI), avoid this and opt for Jeff Hawkins' book "On Intelligence", written by someone who has devoted their life to the field. If it is one on "AI gone bad" you seek, try 2001: A Space Odyssey. For a fictional approach to AI that helped set the groundwork for AI theory, go for Isaac Asimov. If you want a tedious, r…
Mar 02, 2015 rated it really liked it
Superintelligence by Nick Bostrom is a hard book to recommend, but it is one that thoroughly covers its subject. Superintelligence is a warning against developing artificial intelligence (AI). However, the writing is dry and systematic, more like Plato than Wired magazine. There are few real-world examples, because it's not a history of AI but theoretical conjecture. The book explores the possible issues we might face if a superintelligent machine or life form is created. I would have enjoyed the b…
Shea Levy
Read up through chapter 8. The book started out somewhat promisingly by not taking a stand on whether strong AI was imminent or not, but that was the height of what I read. I'm not sure there was a single section of the book where I didn't have a reaction ranging from "wait, how do you know that's true?" to "that's completely wrong and anyone with a modicum of familiarity with the field you're talking about would know that", but really it's the overall structure of the argument that led me to gi…
I'm not going to criticize the content. I cannot finish this. Imagine eating saltines when you have cotton mouth in the middle of the desert. You might be close to describing how dry the writing is. It could be a very interesting read if the writing were done in a more attention-grabbing way.
Blake Crouch
Dec 14, 2018 rated it it was amazing
The most terrifying book I've ever read. Dense, but brilliant.
Clare O'Beara
Feb 12, 2016 rated it really liked it
Shelves: non-fiction, i-t, science
We are now building superintelligences. More than one. The author Nick Bostrom looks at what awaits us. He points out that controlling such a creation might not be easy. If unfriendly superintelligence comes about, we won't be able to change or replace it.
This is a densely written book, with small print, with 63 pages of notes and bibliography. In the introduction the author tells us twice that it was not easy to write. However he tries to make it accessible, and adds that if you don't understa
Jun 17, 2018 rated it really liked it
Like a lot of great philosophy, Superintelligence acts as a space elevator: you make many small, reasonable, careful movements - and you suddenly find yourself in outer space, home comforts far below. It is more rigorous about a topic which doesn't exist than you would think possible.

I didn't find it hard to read, but I have been marinating in tech rationalism for a few years and have absorbed much of Bostrom secondhand so YMMV.

I loved this:
Many of the points made in this book are probably wrong.
Diego Petrucci
Dec 28, 2014 rated it it was amazing
There's no way around it: a super-intelligent AI is a threat.

We can safely assume that an AI smarter than a human, if developed, would accelerate its own development, getting smarter at a rate faster than anything we'd ever seen. In just a few cycles of self-improvement it would spiral out of control. Trying to fight, or control, or hijack it would be totally useless — for comparison, try picturing an ant trying to outsmart a human being (a laughable attempt, at best).

But why is a super-intell
Sep 22, 2014 rated it liked it
Shelves: mind, ai
An extraordinary achievement: Nick Bostrom takes a topic as intrinsically gripping as the end of human history if not the world and manages to make it stultifyingly boring.
Rod Van Meter
May 29, 2015 rated it really liked it
Is the surface of our planet -- and maybe every planet we can get our hands on -- going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford's Future of Humanity Institute, thinks that we can't guarantee it _won't_ happen, and it worries him. It doesn't require Skynet and Terminato…
Mar 22, 2021 rated it really liked it  ·  review of another edition
Artificial General Intelligence (AGI) will recursively improve itself, leading to a technological singularity and unpredictable changes to human civilization. Low probability combined with high impact generates a risk that certainly makes one wonder about the background.

Academic philosopher Nick Bostrom is far from the first to argue about the singularity, but he makes a great effort to imagine how it could come to that, what we can do about it, and what the consequences would be. He works as
Brendan Monroe
Reading this was like trying to wade through a pool of thick, gooey muck. Did I say pool? I meant ocean. And if you don't keep moving you're going to get pulled under by Bostrom's complex mathematical formulas and labored writing and slowly suffocate.

It shouldn't have been this way. I went into it eagerly enough, having read a little recently about AI. It is a fascinating subject, after all. Wanting to know more, I picked up "Superintelligence".

I could say my relationship with this book was ak
Michael Perkins
May 06, 2021 rated it really liked it
Before this book, I read the excellent book on A.I., Human Compatible (link below).

Though Superintelligence came out first, I treated it as a companion volume to Compatible. This author explores several scary what-if A.I. takeover scenarios that are not included in Compatible.

Here's the best review of this book by a true expert....

My review of Human Compatible...
Tammam Aloudat
Feb 12, 2018 rated it really liked it
Shelves: non-fiction
This is at the same time a difficult-to-read and horrifying book. The progress that we may or will see from "dumb" machines into super-intelligent entities can be daunting to take in and absorb, and the consequences can range from the extinction of human life all the way to a comfortable and effortlessly meaningful one.

The first issue with the book is its complexity. It is not only the complexity of the scientific concepts included; one can read the book without necessarily fully understanding th
Richard Ash
Nov 16, 2015 rated it really liked it
Shelves: computers
A few thoughts:
1. Very difficult topic to write about. There's so much uncertainty involved that it's almost impossible to even agree on the basic assumptions of the book.
2. The writing is incredibly thorough, given the assumptions, but also hard to understand. You need to follow the arguments closely and reread sections to fully understand their implications.

Overall, an interesting and thought-provoking book, even though the basic assumptions are debatable.

P.S. (6 months later) Looking back on this
Jul 30, 2018 rated it really liked it
Shelves: technology, audiobook
81st book for 2018.

In brilliant fashion, Bostrom systematically examines how a superintelligence might arise over the coming decades, and what humanity might do to avoid disaster. Bottom line: Not much.

Nov 07, 2016 rated it liked it
Shelves: 2016, science-tech
More detail than I needed on the subject, but I might rue that statement when the android armies are swarming Manhattan.

JK... for now.
Jan 27, 2015 rated it liked it
The idea of artificial superintelligence (ASI) has long tantalized and taunted the human imagination, but only in recent years have we begun to analyze in depth the technical, strategic, and ethical problems of creating as well as managing advanced AI. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies is a short, dense introduction to our most cutting-edge theories about how far off superintelligence might be, what it might look like if it arrives, and what the consequences might be f…
topics  posts  views  last activity   
Goodreads Librari...: Book introduction in Polish 3 15 Jun 19, 2019 08:19AM  
deluge 1 11 Jun 09, 2018 04:57PM  
Madison Bibliovores: February Book 1 6 Jan 15, 2016 09:21PM  
The Aspiring Poly...: Superintelligence 1 36 Nov 03, 2014 10:42AM  

Readers also enjoyed

  • Life 3.0: Being Human in the Age of Artificial Intelligence
  • How to Create a Mind: The Secret of Human Thought Revealed
  • The Singularity is Near: When Humans Transcend Biology
  • The Precipice: Existential Risk and the Future of Humanity
  • The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
  • Human Compatible: Artificial Intelligence and the Problem of Control
  • Our Final Invention: Artificial Intelligence and the End of the Human Era
  • Rationality: From AI to Zombies
  • Our Mathematical Universe: My Quest for the Ultimate Nature of Reality
  • Structures: Or Why Things Don't Fall Down
  • Algorithms to Live By: The Computer Science of Human Decisions
  • AI Superpowers: China, Silicon Valley, and the New World Order
  • Doing Good Better: How Effective Altruism Can Help You Make a Difference
  • Homo Deus: A History of Tomorrow
  • Superforecasting: The Art and Science of Prediction
  • Making Sense
  • On Intelligence
  • The Black Swan: The Impact of the Highly Improbable

Nick Bostrom is Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Strategic Artificial Intelligence Research Center. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014).

“Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization - a niche we filled because we got there first, not because we are in any sense optimally adapted to it.” 71 likes
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” 28 likes