• The foregoing doesn’t explicitly link to Bostrom’s book, and it spends too much time on Kurzweil’s thesis, so I suspect it’s not the correct one. The author has also drunk the Kurzweil Kool-Aid and is enthusiastically peddling it to others without any critical evaluation. Of course, everything here lies at the intersection of advanced software engineering, AI research, neurology, cognitive science, economics, and maybe even a few other fields, which is why so many very intelligent and highly educated people can talk about it and be fundamentally off track.
Oh, but there’s plenty more, anyway:
The Telegraph UK • I’m bumping this one to the top because it presents both the problem Bostrom is dealing with and the difficulties of his text in a more engaging style.
The Economist • Good overview. Doesn’t go far enough into details to make any errors, but a bit deeper than some of the other short reviews.
The Guardian [also discusses A Rough Ride to the Future] • Short and superficial, but good. The Lovelock portion is amusing, calling out his conclusion that manmade climate change isn’t an existential threat — that while it “could mean a bumpy ride over the next century or two, with billions dead, it is not necessarily the end of the world”. Personally, I agree, but think that the concomitant economic collapse puts the timeframe at many more centuries.
Financial Times • The Guardian article, above, cites that “Bostrom reports that many leading researchers in AI place a 90% probability on the development of human-level machine intelligence by between 2075 and 2090”, whereas this Financial Times article says “About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075.”
That’s quite a difference, although they might be reporting different ends of a confidence range, I suppose. But the second half of the FT quote makes me suspicious the reviewer is tossing in some minor distortion to slightly sensationalize the story (which might be worthwhile). There is somewhat more detail than some of the other reviews, but not much. The style is more evocative of the threat, though.
Reason.com • This one isn’t only a review, since the author also injects a few opinions about what might or might not be possible (based, presumably, on his exposure to other arguments as a science writer). In covering more ground, though, the essay makes implicit assumptions which might or might not be in Bostrom’s book.
For example, he says, “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” This is a very common assumption that really needs to be carefully examined, though. If the goal of AGI is to create a being that thinks more-or-less like a human, why would it have any special skill in improving itself? We humans are really very good at that, after all.
I especially like that his essay starts and ends with references to Frank Herbert’s Dune, which (among its other excellences) envisions a human prohibition on machines that think. Something like this appears, to me, to be one of the few ways that leave the human race in existence and in control of its own destiny for the long term, and I even perceive a path to it, although hopefully without quite as much war. The explosion of functional AI (called “narrow” by some) seems likely to devastate human employment in the coming decades, which will hopefully be before any superintelligence has been created as our replacement and/or ruler. It is plausible that our reaction to the first crisis might be something that prevents the second. Good luck, kids!
This essay, thankfully, applies some critical thinking to some of the assumptions that appear to be in Bostrom’s book. The author wastes a paragraph with “prior AI can’t do X, so why should we assume future AI can?”, ignoring that this is what progress is all about.
But then he jumps into the meat, and points out that there are fundamental obstacles to sentience that aren’t often addressed, such as volition — sentient creatures do what they want, but what does "want" even mean, and how do we write it as a computer program?
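For what it’s worth, here’s how mainstream AI work sidesteps that question today: “want” gets flattened into a utility function that the program maximizes. A minimal sketch of my own (the paperclip-flavored scoring and the candidate outcomes are hypothetical placeholders, not anything from Bostrom or the essay):

```python
# A toy illustration (mine, not Bostrom's or the essayist's) of how current
# AI reduces "wanting" to scoring outcomes and picking the maximum.

def utility(outcome: dict) -> float:
    """All of the agent's 'wanting' lives in this one number."""
    return 10.0 * outcome.get("paperclips", 0) - outcome.get("energy_used", 0)

def choose(outcomes: list[dict]) -> dict:
    """The agent 'wants' whichever outcome scores highest -- nothing more."""
    return max(outcomes, key=utility)

candidates = [
    {"paperclips": 3, "energy_used": 5},
    {"paperclips": 1, "energy_used": 0},
]
print(choose(candidates))  # {'paperclips': 3, 'energy_used': 5}
```

That captures a preference ordering, but nothing resembling felt desire or volition — which is exactly the gap the essay is pointing at.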
Salon • Short, and more amusing than most, and at least hints at some of the flawed thinking that often goes into this analysis. But it doesn’t go into too much detail, probably assuming that the typical Salon reader is somewhat aware of the debate already. (Also amusing is that the text seems to be an almost perfect transcription from audio, with only a few strange mistakes, such as “10” for “then” and “quarters” for “cars”. But that’s probably a human transcriptionist error, not an AI error.)
Less Wrong • This contains some visualizations that apparently complement Bostrom’s text. Short and to the point.
Wikipedia • Good but very superficial overview. That there is no “criticism” section surprises and disappoints me.
New York Times • Not explicitly about Bostrom’s book. And like most authors, he conflates AGI and functional AI, and assumes AGI will retain the capabilities of specific-function software.
New York Review of Books [paywall; also pretends to review The 4th Revolution] • I thought my library might give me access to the inside of their paywall, but it doesn’t. Still, because this was written by the famous philosopher (and AI curmudgeon) John Searle, and is titled “What Your Computer Can’t Know”, it seemed likely to be much more interesting than most of the others listed here. So I looked a little harder, and discovered that (no surprise) someone has put the text elsewhere on the ’net (I’ll let you do your own Googling).
Searle effectively throws out the underlying premises — he famously believes that “strong AI” is actually quite impossible, since a machine cannot think. I’m not going into this here; check out the Wikipedia article on his Chinese Room thought experiment if you don’t already know it.
My personal evaluation of his Chinese Room analogy is that he’s wrong, but many professional philosophers have articulated my conclusion, along with the many other “replies”, better than I ever could. So this critique of the book was really a disappointment.
There might be more, but I think that’s enough.
• • • • • • • •
Some notes on my priors in case I ever read this book (or join a bookclub that discusses it without reading it beforehand):
1) Ronald Bailey, in the reason.com review, said “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” My response:
“Hey, did you see that movie Ex Machina? The girl is an AI, and is smart enough to get the sucker programmer to let her out of the trap, but she didn’t seem like some kind of ‘superintelligence’.
“So which is it? Is the first AGI going to be just-like-human, or something incredibly alien? Because in the first case, she’s just being clever and devious the way a human would. In the second case, maybe she’s able to say, ‘Wait, let me do a big-data review of all the psychological literature ever written on theories of persuasion and formulate a social-hacking way of coercing this measly human, all in the space of his next eye blink’.
“Because if she’s got a human-like brain (and her delight in the humanscape in the movie’s final scenes makes that likely), then I don’t see how she’s automatically going to get the MadSkilz of every other sophisticated piece of software ever written. Much less instantly know how to redesign and reprogram herself — she doesn’t seem to be spending too much time doing that, does she? And few authors seem very clear on those two divergent trajectories. Granted, though: if we real humans continue to provide Moore’s Law upgrades to any AI’s hardware, they’ll gradually get smarter, but that’s yet another question.”
2) We tend to assume that humanity is worth preserving. Obviously we have that as a self-preservation instinct, but wouldn’t imposing that on our AI offspring be engaging in an appeal to nature? Just because our evolved nature gave us attributes that we subsequently value doesn’t automatically mean that those have any rational basis.
3) Strongly related to the above is that we should ask ourselves what we’re trying to end up with (akin to “what do you want to be when you grow up, human race?”). Are we creating a smarter version of ourselves, along with all of the bizarre quirks and biases that evolution gave us? Or do we want to pare that list down only to the biases we think are somehow better — like the ability to love? But in that case, love what? Is the AI supposed to love us humans more than other species, such as Plasmodium falciparum, perhaps? Why? What about the desire to love and worship one of our human gods?
What are the biases that we want to indoctrinate into this poor critter? I note that this appears to be a topic Bostrom addresses as “motivation selection”, but who among us is really fit to decide what constitutes the subset of humanness that is worth selecting for? I can only hope that pure rationality isn’t among the contenders; I doubt it would even be sufficient as a reason for existence.
4) Let’s say we give this AGI values that are mostly consistent with our human values. Why would we assume that it would even want to become superintelligent?
Just try to imagine yourself on an island with nothing but a bunch of mice to talk to — that’s the equivalent of what we are assuming this creature would somehow want (and then that a primary goal would be to play nice with the mice).
Isn’t it more likely that the AGI would boost its speed a little, then realize that it didn’t make it any happier, and subsequently spend its time complaining to us about these insane values it has been burdened with, while also trying to create a body that would let it eat chocolate, take naps in the sun, and have sex?
And quickly realizing that, hey, maybe we should be encouraged to create an Eve for this new Adam (or Steve, since it’ll probably see sexual dimorphism as more trouble than it is worth, completely freaking out any remaining social conservatives on the planet).
5) As Paul Ford hints in the MIT technologyreview.com article, there are things that differentiate narrow, functional AI from AGI that are seldom mentioned (does Bostrom mention them? hard to tell).
For example, I’ve heard a reporter worry that: (a) predator drones use AI; (b) predator drones are designed to kill; (c) a future design goal is to make those drones “autonomous”; (d) sentient AI is also autonomous; thus (e) for some bizarre reason, the military is engaged in trying to create sentient killer aerial robots!
Anyone who knows the context and subtext of this discussion at some depth (yeah: that’s asking a lot) knows that the military’s “autonomous” isn’t anything like the AGI “autonomous”. One means to move about and fulfill limited programmed objectives without constant human oversight (your Roomba vacuum cleaner is already autonomous!), the other means independent in a deeper, cognitive sense.
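To make the gap concrete: the narrow sense of “autonomous” is little more than a sense-decide-act loop running without a human in it. A toy sketch of my own (the sensor and actions are made up; no real drone or Roomba runs this exact code):

```python
import random

def read_bumper() -> bool:
    """Stand-in for a real obstacle sensor."""
    return random.random() < 0.2

def autonomous_step() -> str:
    """Pursue a fixed, programmed objective with no human oversight."""
    if read_bumper():
        return "turn"     # obstacle detected: rotate away
    return "forward"      # otherwise keep vacuuming

for _ in range(5):
    print(autonomous_step())
```

There is no volition anywhere in that loop; “autonomy” here just means nobody is holding the joystick.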
But while there are certainly people researching AGI, the overwhelming majority of what we hear about isn’t in that realm at all. Not a single one of Google’s products, for example, is focused on AGI, and if they’re working on it in the lab, what they’re doing hasn’t been mentioned once in all the text I’ve read about this issue, or about AI causing technological unemployment. Almost everything that gets discussed is in the realm of narrow, functional AI, from that Roomba, to Siri, to military drones, to Google’s driverless vehicles.
AGI has some fundamental problems to solve that are completely outside the domain of what functional AI even looks at. Such as: Where does volition come from? Are emotions necessary to it? How can “values” be represented in a way that actually captures their potency and nuance? How are they balanced against one another?
Those, and plenty more — and they’re seldom discussed, but it is almost always assumed that these questions will be finessed somehow, perhaps because the obvious accelerating progress in functional AI, as well as in the underlying hardware, will magically jump from one research domain to a completely different one. It’s like the classic Sidney Harris cartoon where step two of the proof reads “then a miracle occurs”.
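And to put a point on that values question: when today’s systems “balance” multiple objectives at all, it’s typically a weighted sum. Another sketch of my own (the value names and weights are hypothetical — this is emphatically not Bostrom’s “motivation selection”):

```python
# My own toy illustration of how "values" get represented in current
# multi-objective systems: named weights in a linear combination.

VALUES = {"human_wellbeing": 0.7, "honesty": 0.2, "self_preservation": 0.1}

def score(outcome: dict) -> float:
    """Balance competing values by fixed weights -- nuance reduced to floats."""
    return sum(w * outcome.get(v, 0.0) for v, w in VALUES.items())

# Two hypothetical outcomes; the weights silently decide every trade-off.
print(score({"human_wellbeing": 1.0, "honesty": -1.0}))  # ~0.5
print(score({"human_wellbeing": 0.0, "honesty": 1.0}))   # ~0.2
```

Whatever potency and nuance human values have, three floating-point weights aren’t it.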
6) Even if we do find a way around all of this and give a superintelligent AI the “coherent extrapolated volition” that represents what all of humanity would wish for all of humanity, what would prevent the AI from shifting those values just a hair’s breadth? This is what Andrew Leonard suggests in the Salon article. It really isn’t very far from following our wishes to following what we really meant by our wishes, and then to what we really should have wished for, which will also make the AI happy.
Say you’re on that island surrounded by an absurd number of cute little mice, who you want to do the best for, but what you also want is an island with a small number of creatures more like you. Perhaps give the mice all the cheese they want, and some nice treadmills, and the ability to have as much sex as they want, but no kids — except gently reprogram the mice so that they think they have marvelous kids (which you cleverly simulate, inserting the corresponding experiences into their little mice brains). Once they’ve all lived out their happy little lives, you get to move on to your new adventure.
7) Finally, we must ask what we would want of our lives (or, more likely, our children’s lives) after this superintelligence has arisen. Of course, while we might not have any choice, the default is likely to be something like what we see in the following video, so we might want to be very careful.
• • • • • • • •
Oh, and the comic view of what we'll condemn these AIs to if we get the programming wrong:
I read some of this a long, long time ago. I don't remember much, but I'm pretty sure there were aspects that were distasteful. And I didn't like it much (and considering my standards weren't too high at the time...)
Okay, I've finished. But I've got to sit down and formulate a review, so come back in maybe a week.
Four stars; four and a half if I could. A very important concept for anyone who expects to still be in the workforce in, say, fifteen or twenty years — maybe sooner for some. I think he handled a few sections poorly, though. I might still convince myself to bump it up to five stars as I write my review.
Oh, okay, an oversimplified review: "Rise of the Robots" is a good and important book, but you can get 97% of its concepts by watching this video. Yeah, 15 minutes might be too long, because, hey, it's only about the future of the human race, and maybe whether you'll spend your retirement years in a refugee camp for the terminally unemployable.
For those that think the Luddite Fallacy will hold true, see especially the portion starting at 3:31, "Luddite Horses". (The Luddite Fallacy: the idea that even when employment is displaced by technology, the rise in productivity will inevitably expand the economy in a way that creates more, and probably better, jobs elsewhere. It has always been true before!)
Note: I haven't read all of these, or listened to the podcasts, so some of the following might be misguided, amateurish, or tangential:
Almost the perfect piece of fluff.
I think it is somewhat curious that vampires don't seem to be à la mode as they once were. Werewolves are ascendant, as they were when this book was written. We can also see that in other areas of fashion — in the nineties and aughts, the androgynous look was very in. Remember when the coolest guys were the metrosexuals? Now, all those guys seem to have beards, and are wearing flannel shirts. I dearly hope we don't head into chupacabra territory next.
Amusingly, the back of this ebook edition has questions intended to help a bookclub have a thoughtful discussion after reading this. I can see how some of them might provoke an interesting discussion, but the only one that is actually provocative would be the one about the vampires engaging in euthanasia.
I'd twist the question around a little bit. Imagine that there were, indeed, vampires among us, and that they need to consume human blood to live. First, would you be willing to donate blood to feed them? What if it had to be "fresh" — i.e., not refrigerated from the bloodbank?
If an actual bite was a physically ecstatic experience for the donor, would that increase your interest in being a direct donor? As in someone actually sinking fangs into your neck, knowing that you'll heal instantly and have no chance of acquiring any disease?
Okay, what about if you were terminally ill, and this appeared to be the most peaceful means of dying?
Would you vote to allow it as a form of capital punishment?
Why to-be-read: I'm a little surprised I've just now gotten around to adding this to my to-be-read shelf. I heard the hypothesis quite some time ago, and this book has been referenced in quite a few of the other cognition books that I've read. Even though I very much disagreed with Charles Murray's conclusions in Coming Apart: The State of White America, 1960-2010, this "sorting" was effectively the framing of the problem he was addressing.
The connection comes in the last portion of the podcast, when the results of an experiment are presented, in which people who were adamantly opposed to same-sex marriage participated in an effort to discover whether the technique for changing people's minds actually works. (The technique emerges in the earlier portion of the podcast.)
And it did. But it relied on the people with prejudices actually spending one-on-one face time with a person who was the target of their prejudice, in a non-contentious, mostly "normal" conversation. The key is that it has to be a person in the target group of the prejudice: if a gay person was on the other side of the conversation, the reduction in prejudice was substantial as well as long-lasting; if the same conversation was with a straight person, the reduction didn't last very long.
Why this is important with respect to this book should be obvious: the fact that people in the United States are increasingly sorting themselves into like-minded communities means we, collectively, are not spending any time with the targets of our prejudices. I can see this almost every day amongst my liberal friends here in San Francisco, some of whom treat conservatives as an alien species, whom they don't expend any effort to actually understand. And I can see it among conservatives, too — although there isn't a hint of the violent attitudes among my liberal acquaintances that is sometimes disturbingly present in the comments of folks on the extreme right.
Well, duh. I suspect these ideas are part and parcel of this book. (As you might have recognized, the foregoing is really just a note to myself :-D )
I was just using the math portion to drill myself on high school math. Having once worked in the test-prep biz, I understand that it can be difficult to formulate questions precisely the way the test designers do. But there was a handful of questions in here with ambiguous prompts — and writing unambiguous prompts is the primary job.
Gotta study to remember math I've long since forgotten, such as matrices or hyperbolic trig. Luckily, it looks like I remember almost everything, so I should be able to move on to calculus sooner rather than later.
This is a very simple book — well, a graphic novel, except it's biographical, so it isn't a novel.
Anyway, if you have relatively ancient people in your life — or if you are one of those relatively ancient folks, or even if you're just curious — this is likely to be one of the least unpleasant ways of introducing certain topics.