Spoiler addendum added below.
Almost all “science fiction” books have at least one element critical to the story that is nevertheless fantastical. The faster-than-light travel and transporters in Star Trek, for example, or the Force (and FTL, and light sabers, etc.) in Star Wars. The subgenre in which this is minimized is “hard science fiction”. Generally, that’s okay. For those who appreciate thoughtful speculative fiction, the greatest affection tends to go to authors who carefully choose one fantastic element and extrapolate a plausible world consistent with that change. There are other authors who specialize in scifi that has a stronger relationship to the thriller genre, too.
Nexus is in a pretty sweet spot on that spectrum. The big fantastic element is the heavy use of nanotechnology, although that stuff is so cool that it is understandably the go-to solution for techno-magic. Anyone familiar with Star Trek TOS will remember how variations on lasers were magic (phasers, photon torpedoes, tractor beams).
But most of the rest of the technology was a plausible extrapolation from today. Oh, there were two glaring omissions: the effects of climate change and the increasing prevalence of AI and robotics. I mean, there were still humans driving cars in 2040! In the San Francisco Bay Area!
But this is an action-packed thriller, too. Fans of military fiction will probably get a big kick out of this. I also enjoyed the not-absurdly unlikely politics. The U.S. government doesn’t come off too well, but that’s probably quite realistic given America’s current trajectory.
I’d definitely recommend this as a quick and easy scifi snack.
Addendum: (view spoiler)[As I mentioned above, the primary fantasy element in this story is nanotechnology. Ironically, scientific news has just come out that hints at how plausible the book’s projection might be. Researchers have just created what may be the smallest transistor we’re ever likely to see, or at least something approaching that limit, at 167 picometres in diameter. It’s just a single phthalocyanine molecule (C₃₂H₁₈N₈) surrounded by 12 indium atoms, placed on an indium arsenide crystal. (See the press coverage here or the academic article here.) In the article, the caption of the image showing red blood cells states that “around 7,200 of the new transistors could fit on a single cell”. That’s an interesting size, because the 1974-era Intel 8080 was about 6,000 transistors. And while that isn’t very advanced compared to today (state-of-the-art processors are over one billion transistors), if a sufficiently vast number of them could be networked, as the book asserts, then it becomes a tiny bit more plausible that a computer could be squeezed in.
Red blood cells are pretty small compared to some neurons, but not all. Red blood cells run about 6–8 µm, while the central soma of a neuron varies from 4 to 100 µm. So a microprocessor of roughly the complexity of an Intel 8080 might be able to hide inside a big neuron.
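A quick back-of-envelope sanity check on those figures (a sketch using only the numbers quoted above; the caption’s 7,200-per-cell figure presumably accounts for wiring and the surrounding indium atoms, not just the bare molecule):

```python
import math

# Figures quoted above (press coverage and standard cell-biology values).
transistor_d = 167e-12       # single-molecule transistor diameter, metres
rbc_d = 7e-6                 # red blood cell diameter, ~6-8 µm
per_cell = 7200              # transistors per cell, from the image caption
intel_8080 = 6000            # transistor count of the 1974 Intel 8080

rbc_area = math.pi * (rbc_d / 2) ** 2   # face area of one cell
footprint = rbc_area / per_cell         # implied area per transistor
pitch = math.sqrt(footprint)            # implied centre-to-centre spacing

print(f"cell area:      {rbc_area * 1e12:.0f} um^2")    # ~38 µm²
print(f"implied pitch:  {pitch * 1e9:.0f} nm")          # ~73 nm
print(f"bare molecule:  {transistor_d * 1e12:.0f} pm")  # 167 pm
print(f"8080s per cell: {per_cell / intel_8080:.1f}")   # ~1.2
```

So even at the caption’s generous ~73 nm spacing, one cell face holds a little more than an 8080’s worth of transistors.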
That still leaves unsaid where and how it gets energy, how it communicates with other neuronal coprocessors and the outside world, and how it detects what its host neuron is actually doing.
But it is a step forward. (hide spoiler)]
• The foregoing doesn’t explicitly link to Bostrom’s book, so this might not be right — it spends too much time on Kurzweil’s thesis, so I suspect it’s not the correct one. And the author has drunk the Kurzweil Kool-Aid and is enthusiastically peddling it to others without any critical evaluation. Of course, everything here lies at the intersection of advanced software engineering, AI research, neurology, cognitive science, economics, and maybe even a few other fields, which is why so many very intelligent and highly educated people can talk about it and be fundamentally off track.
Oh, but there’s plenty more, anyway:
The Telegraph UK • I’m bumping this one to the top because it presents both the problem Bostrom is dealing with and the difficulties of his text in a more engaging style.
The Economist • Good overview. Doesn’t go far enough into details to make any errors, but a bit deeper than some of the other short reviews.
The Guardian [also discusses A Rough Ride to the Future] • Short and superficial, but good. The Lovelock portion is amusing, calling out his conclusion that manmade climate change isn’t an existential threat, but that while it “could mean a bumpy ride over the next century or two, with billions dead, it is not necessarily the end of the world”. Personally, I agree, but think that the concomitant economic collapse puts the timeframe at many more centuries.
Financial Times • The Guardian article, above, cites that “Bostrom reports that many leading researchers in AI place a 90% probability on the development of human-level machine intelligence by between 2075 and 2090”, whereas this Financial Times article says “About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075.”
That’s quite a difference, although they might be reporting different ends of a confidence range, I suppose. But the second half of the FT quote makes me suspicious the reviewer is tossing in some minor distortion to slightly sensationalize the story (which might be worthwhile). There is somewhat more detail than some of the other reviews, but not much. The style is more evocative of the threat, though.
Reason.com • This one isn’t only a review, since the author also injects a few opinions about what might or might not be possible (based, presumably, on his exposure to other arguments as a science writer). In covering more ground, though, the essay makes implicit assumptions which might or might not be in Bostrom’s book.
For example, he says, “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” This is a very common assumption that really needs to be carefully examined, though. If the goal of AGI is to create a being that thinks more-or-less like a human, why would it have any special skill in improving itself? We humans are really very good at that, after all.
I especially like that his essay starts and ends with references to Frank Herbert’s Dune, which (among its other excellences) envisions a human prohibition on machines that think. Something like this appears, to me, to be one of the few ways that leave the human race in existence and in control of its own destiny for the long term, and I even perceive a path to it, although hopefully without quite as much war. The explosion of functional AI (called “narrow” by some) seems likely to devastate human employment in the coming decades, which will hopefully be before any superintelligence has been created as our replacement and/or ruler. It is plausible that our reaction to the first crisis might be something that prevents the second. Good luck, kids!
This essay thankfully has some critical thinking applied to some of the assumptions that appear to be in Bostrom’s book. The author wastes a paragraph with “prior AI can’t do X, so why should we assume future AI can?”, ignoring that this is what progress is all about.
But then he jumps into the meat, and points out that there are fundamental obstacles to sentience that aren’t often addressed, such as volition — sentient creatures do what they want, but what does "want" even mean, and how do we write it as a computer program?
Salon • Short, and more amusing than most, and at least hints at some of the flawed thinking that often goes into this analysis. But it doesn’t go into too much detail, probably assuming that the typical Salon reader is somewhat aware of the debate already. (Also amusing is that the text seems to be an almost perfect transcription from audio, with only a few strange mistakes, such as “10” for “then” and “quarters” for “cars”. But that’s probably a human transcriptionist error, not an AI error.)
Less Wrong • This contains some visualizations that apparently complement Bostrom’s text. Short and to the point.
Wikipedia • Good but very superficial overview. That there is no “criticism” section surprises and disappoints me.
New York Times • Not explicitly about Bostrom’s book. And like most authors, he conflates AGI and functional AI, and assumes AGI will retain the capabilities of specific-function software.
New York Review of Books [paywall; also pretends to review The 4th Revolution] • I thought my library might give me access to the inside of their paywall, but it doesn’t. Still, because this was written by the famous philosopher (and AI curmudgeon) John Searle, and is titled “What Your Computer Can’t Know”, it seemed likely to be much more interesting than most of the others listed here. So I looked a little harder, and discovered that (no surprise) someone has put the text elsewhere on the ’net (I’ll let you do your own Googling).
Searle effectively throws out the underlying premises — he famously believes that “strong AI” is actually quite impossible, since a machine cannot think. I’m not going into this here; check out the Wikipedia article on his Chinese Room thought experiment if you don’t already know it.
My personal evaluation of his Chinese Room analogy is that he’s wrong, but many professional philosophers and others have articulated that conclusion, along with many other “replies”, better than I ever could. So this critique of the book was really a disappointment.
There might be more, but I think that’s enough.
• • • • • • • •
Some notes on my priors in case I ever read this book (or join a bookclub that discusses it without reading it beforehand):
1) Ronald Bailey, in the reason.com review, said “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” My response:
“Hey, did you see that movie Ex Machina? (view spoiler)[The girl is an AI, and is smart enough to get the sucker programmer to let her out of the trap, but she didn’t seem like some kind of ‘superintelligence’.
“So which is it? Is the first AGI going to be just-like-human, or something incredibly alien? Because in the first case, she’s just being clever and devious the way a human would. In the second case, maybe she’s able to say, ‘Wait, let me do a big-data review of all the psychological literature ever written on theories of persuasion and formulate a social-hacking way of coercing this measly human, all in the space of his next eye blink’.
“Because if she’s got a human-like brain (and her delight in the humanscape in the movie’s final scenes makes that likely), then I don’t see how she’s automatically going to get the MadSkilz of every other sophisticated piece of software ever written. Much less instantly know how to redesign and reprogram herself — she doesn’t seem to be spending too much time doing that, does she? (hide spoiler)] And few authors seem very clear on those two divergent trajectories. Granted, though: if we real humans continue to provide Moore’s Law upgrades to any AI’s hardware, they’ll gradually get smarter, but that’s yet another question.”
2) We tend to assume that humanity is worth preserving. Obviously we have that as a self-preservation instinct, but wouldn’t imposing that on our AI offspring be engaging in an appeal to nature? Just because our evolved nature gave us attributes that we subsequently value doesn’t automatically mean that those have any rational basis.
3) Strongly related to the above is that we should ask ourselves what we’re trying to end up with (akin to “what do you want to be when you grow up, human race?”). Are we creating a smarter version of ourselves, along with all of the bizarre quirks and biases that evolution gave us? Or do we want to pare that list down only to the biases we think are somehow better — like the ability to love? But in that case, love what? Is the AI supposed to love us humans more than other species, such as Plasmodium falciparum, perhaps? Why? What about the desire to love and worship one of our human gods?
What are the biases that we want to indoctrinate into this poor critter? I note that this appears to be a topic Bostrom addresses as “motivation selection”, but who among us is really fit to decide what constitutes the subset of humanness that is worth selecting for? I can only hope that pure rationality isn’t among the contenders; I doubt it would even be sufficient as a reason for existence.
4) Let’s say we give this AGI values that are mostly consistent with our human values. Why would we assume that it would even want to become superintelligent?
Just try to imagine yourself on an island with nothing but a bunch of mice to talk to — that’s the equivalent of what we are assuming this creature would somehow want (and then that a primary goal would be to play nice with the mice).
Isn’t it more likely that the AGI would boost its speed a little, then realize that it didn’t make it any happier, and subsequently spend its time complaining to us about these insane values it has been burdened with, while also trying to create a body that would let it eat chocolate, take naps in the sun, and have sex?
And it would quickly realize that, hey, maybe we should be encouraged to create an Eve for this new Adam (or Steve, since it’ll probably see sexual dimorphism as more trouble than it is worth, completely freaking out any remaining social conservatives on the planet).
5) As Paul Ford hints at in the MIT technologyreview.com article, there are things that differentiate narrow, functional AI from AGI that are seldom mentioned (does Bostrom mention them? hard to tell).
For example, I’ve heard a reporter worry that: (a) predator drones use AI; (b) predator drones are designed to kill; (c) a future design goal is to make those drones “autonomous”; (d) sentient AI is also autonomous; thus (e) for some bizarre reason, the military is engaged in trying to create sentient killer aerial robots!
Anyone who knows the context and subtext of this discussion at some depth (yeah: that’s asking a lot) knows that the military’s “autonomous” isn’t anything like the AGI “autonomous”. One means to move about and fulfill limited programmed objectives without constant human oversight (your Roomba vacuum cleaner is already autonomous!), the other means independent in a deeper, cognitive sense.
But while there are certainly people researching AGI, the overwhelmingly vast majority of what we hear about isn’t in that realm at all. Not a single one of Google’s products, for example, is focused on AGI, and if they’re working on it in the lab, what they’re doing hasn’t been mentioned once in all the text I’ve read about this issue, or about AI causing technological unemployment. Almost everything that gets discussed is in the realm of narrow, functional AI, from that Roomba, to Siri, to military drones, to Google’s driverless vehicles.
AGI has some fundamental problems to solve that are completely outside the domain of what functional AI even looks at. Such as: where does volition come from? are emotions necessary to that? how can “values” be represented in a way that actually captures their potency and nuance? how are they balanced against one another?
Those, and plenty more — and they’re seldom discussed; it is almost always assumed that these questions will be finessed somehow, perhaps because the obvious accelerating progress in functional AI, as well as in the underlying hardware, will magically jump from one research domain to a completely different one. It’s like the classic Sidney Harris cartoon: “Then a miracle occurs.”
6) Even if we do find a way around all of this and give a superintelligent AI the “coherent extrapolated volition” that represents what all of humanity would wish for all of humanity, what would prevent the AI from shifting those values just a hair’s breadth? This is what Andrew Leonard suggests in the Salon article. It really isn’t very far from following our wishes to following what we really meant by our wishes, and then to what we really should have wished for, which will also make the AI happy.
Say you’re on that island surrounded by an absurd number of cute little mice, who you want to do the best for, but what you also want is an island with a small number of creatures more like you. Perhaps give the mice all the cheese they want, and some nice treadmills, and the ability to have as much sex as they want, but no kids — except gently reprogram the mice so that they think they have marvelous kids (which you cleverly simulate, inserting the corresponding experiences into their little mice brains). Once they’ve all lived out their happy little lives, you get to move on to your new adventure.
7) Finally, we must ask what we would want of our lives (or, more likely, our children’s lives) after this superintelligence has arisen. Of course, while we might not have any choice, the default is likely to be something like what we see in the following video, so we might want to be very careful.
• • • • • • • •
Oh, and the comic view of what we'll condemn these AIs to if we get the programming wrong:
I read some of this a long, long time ago. I don't remember much, but I'm pretty sure there were aspects that were distasteful. And I didn't like it much (and considering my standards weren't too high at the time...)
In the coming years, your job is likely to evaporate. That might mean now, or it might mean twenty-five or thirty years. But unless you’re extraordinarily unusual, it’ll happen.
I’m going to start by giving a few examples.
Take the profession of accountancy. I’m oversimplifying, but pretty much what an accountant does is match an entity’s financial information to the appropriate laws and rules, then provide analysis of how well those match up, and maybe fill out some forms. Guess what? There’s nothing in there that a software program couldn’t do. In fact, many people who don’t make a lot of money already use such software to file their taxes; every year that software gets a little more sophisticated, and a lot of techie folks use software that leaves the remaining accountants with less and less to do, year by year. The profession of accountant will likely be almost completely extinct within a decade (long before we see those autonomous cars everyone keeps talking about).
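To make that concrete, here’s a deliberately toy sketch of the rule-matching at the heart of the job. The brackets and rates are invented for illustration, not real tax law, but the shape of the task (facts in, rules applied, forms out) is exactly what software is good at:

```python
# Toy illustration of rule-matching: apply a rule table to financial
# facts. The brackets and rates below are invented, not real tax law.
BRACKETS = [            # (upper bound of bracket, marginal rate)
    (10_000, 0.10),
    (50_000, 0.20),
    (float("inf"), 0.30),
]

def tax_owed(income: float) -> float:
    """Apply each marginal bracket to the slice of income it covers."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

print(tax_owed(75_000))   # 1000 + 8000 + 7500 = 16500.0
```

Real tax code is vastly bigger, of course, but bigger rule tables are precisely what computers don’t mind.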
Let’s look at something much tougher, like a barber or hair stylist. The job there is to examine the client’s features, ask questions about what that client wants, suggest a style that is both feasible and desirable, and then cut hair to that style. Right now, that is about as far from what a computer could do as any profession in existence.
Well, first, speedy dexterity isn’t something that robots are too good at, except when they can be programmed to do precisely the same thing, over and over again, in which case they do much better than mere humans. And comprehension of a complex visual scene is another really tough computational problem. But if you’ve been following the pace of progress, you know that it is only a matter of time before the robots get there.
There’s a video floating around showing robots failing amusingly (but miserably, and with silly music, so we can feel superior!) during a DARPA challenge that folks are getting a kick out of. Recall, however, how very recently the idea of a robot walking around on two feet would have been absurd. Now we laugh because they sometimes fall down while trying to open doors or climb stairs or get into cars. Given the many millions going into research, how long do you think that will last?
A vast database of head shapes and facial and hair features could already be built just by mining the treasure trove of images accessible via the world wide web. AI that learns which of those are considered comical and which attractive would still be a challenge, but is probably an easier task than programming Watson was for IBM. Programming a hair-cutting robot with the knowledge of what set of snips will create the desired look would be even easier, since it could be endlessly simulated purely in virtual space.
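Here’s a hypothetical sketch of what “endlessly simulated in virtual space” could mean, with hair reduced to a toy model of strand lengths; everything in it is invented for illustration:

```python
# Toy planner: search for the snips that move a simulated head of hair
# toward a target style. Hair is just a list of strand lengths here;
# a "snip" shortens one strand. All of this is an invented toy model.
def plan_snips(hair: list[float], target: list[float], snip: float = 0.5):
    """Greedily snip the strand with the largest excess until done."""
    hair, plan = list(hair), []
    while True:
        excess = [h - t for h, t in zip(hair, target)]
        i = max(range(len(hair)), key=lambda j: excess[j])
        if excess[i] <= 0:           # nothing left too long: style reached
            return plan
        cut = min(snip, excess[i])   # can only cut shorter, never longer
        hair[i] -= cut
        plan.append((i, cut))

print(plan_snips([10.0, 9.0, 8.0], [8.0, 8.5, 8.0]))
# [(0, 0.5), (0, 0.5), (0, 0.5), (0, 0.5), (1, 0.5)]
```

A real system would plan in three dimensions against a scanned model of the client’s head, but the search-in-simulation principle is the same.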
Yeah, it will take years before we see this happen, but that just means it will be at the tail end of the tsunami instead of at the beginning, where the accountants are already feeling vulnerable. (This makes me wonder: how many out-of-work accountants will be able to get jobs as hairdressers?)
There are some jobs that, as far as we can tell, are completely out of range of the robots and their AI software, but that number will get smaller and smaller over the decades, as engineers learn to make the software more sophisticated and the hardware it runs on continues to get faster.
The real sweet spot for humans is to be truly creative. That doesn’t mean anyone in a “creative field” gets a pass, however. AI is already composing quotidian music and doing the rote job of journalists. Being really creative means knowing when and how to break the rules in a way that is fundamentally unexpected. A computer never would have created John Cage’s 4’33”, for example.
The work of Thomas Kuhn, whose The Structure of Scientific Revolutions made the word “paradigm” the cliché it is today, illustrates this. Most science, like most creativity, exists within a paradigm that people in the field understand. Most “normal science”, like most normal creativity, doesn’t bust out of that paradigm. Highly sophisticated software can be taught that paradigm, and how to explore its domain, and how to evaluate whether the results of those explorations are consistent with other highly-regarded results.
How this revolution is progressing is what Rise of the Robots: Technology and the Threat of a Jobless Future is all about.
Now, you might be skeptical. This does sound, after all, like the Luddite Fallacy, doesn’t it? If you don’t know the term, it refers to the time at the beginning of the industrial revolution when craftsfolk who wove cloth on hand looms tried to keep the innovative machine looms from making them redundant. The “fallacy” part is because there have always been compensatory effects — some people lose their careers, but the gains in technological capacity and productivity make other forms of production possible, employing even more people.
So why is this time so different? Because what the machines are replacing is different.
The simple machines replaced work that was dirty and dangerous. In the past century, more sophisticated machines replaced work that was dull — those robots that bolt together auto bodies, for example, replaced large numbers of men who used to get pretty good wages for doing an unremittingly boring job.
But today, machines are replacing our minds, not our muscles. More importantly, it is very unlikely that some vast new field of economic activity will suddenly appear on the horizon that will employ all of the workers made redundant — once machines are stronger and faster, more accurate and precise, more patient and (at least) as smart, what kind of job would that be?
If you need more convincing, here’s an analogy. Once upon a time, humans used animals to do our brute labor. It actually took thousands of years for us to arrange that, of course. Before we’d invented the wheel, animals could carry stuff on their backs. Reliable wheels were actually quite a stunning leap forward! Eventually, animals could do most of our hardest labor, except where our brains made us more adaptive to change or subtle details.
But think about what happened when we invented the steam engine. The first practical steam engine came along (as did a stunning number of other developments) right near the end of the eighteenth century (which is related to why those Luddites were rioting a few decades later). Even though it took millennia for us to learn to use animals, in most ways we’d retired them within a century. The key point is that even though those animal muscles could still have been used, there were effectively no jobs for which they were actually better than machines.
That’s where our brains are about now.
Now, there are still people who don’t believe this is going to happen. For example, in the essay How Technology Is Destroying Jobs, a professor of engineering at MIT is quoted:
❝For that reason, Leonard says, it is easier to see how robots could work with humans than on their own in many applications. “People and robots working together can happen much more quickly than robots simply replacing humans,” he says. “That’s not going to happen in my lifetime at a massive scale. The semiautonomous taxi will still have a driver.”❞
Really? By all indications, autonomous vehicles are already safer than human drivers. Although there are still tricky situations where they could make disastrous choices, they’d still probably have a better overall safety record than us, and they’ll be getting better — we won’t, except with their help. So why would that taxi company want to pay to have a more-fallible human sitting there, bored, to second-guess the computer? It is true that people and robots working together can sometimes do better, but in far too many cases that will be a fairly short interim period, until the software engineers understand what humans are contributing and replace those final aspects — economics will create huge incentives to get the human out of the picture.
First, “step up”. Head for higher intellectual ground.
What’s the flaw here? Well, the top of the pyramid would be a great place, but there simply isn’t much room there. The example given is that, instead of using a biochemist to do a preliminary evaluation on a candidate drug, let the computers do it, and have the biochemist “pick up at the point where the math leaves off”. The difficulty is that there is already a researcher doing that, and the computers are replacing the dozens of lower-tier chemists who are doing the simpler work. It’s like telling a sous-chef to “step up” and become the restaurant’s chef de cuisine! That might work for a very small number of very talented sous-chefs, but it won’t work on any large scale at all.
Second, “step aside”. Use skills that can’t be codified.
One example used here is even more absurd than the biochemist example: “Apple’s revered designer Jonathan Ive can’t download his taste to a computer.” Obviously, we can’t all be Jony Ive. But what about that accountant who was mentioned at the beginning? Can’t they learn to use personality skills to be better at interacting with the clients? Sure — but won’t all the accountants want that gig? And while being the “human face” of the software might be a safe job for quite some time, it does reflect a de-skilling from the original job. This is also the category for those truly creative types who can consistently deliver outside-the-box thinking that the programmers can’t predict, and that can’t be found in correlations within huge datasets.
Third, “step in”. Be the person that double-checks the software for mistakes.
An example given here involved mortgage-application evaluation software that rejected former Federal Reserve chairman Ben Bernanke’s mortgage application because it couldn’t properly evaluate his career prospects on the lecture circuit. This will be a pretty sweet job category, but not because the software will continue to make “mistakes”. It’ll be because the software is taught to recognize unusual situations and automatically funnel them to human assistants. Like the human co-pilot of a semiautonomous taxicab, though, there will be a lot of financial incentives to make this a very rare job.
Fourth, “step narrowly”. Find a sub-sub-sub-speciality that isn’t economical to automate.
The example in the article shows clearly how narrow these opportunities are: imagine being the person who specializes in matching the sellers and buyers of Dunkin’ Donuts franchises! Yeah, all the real estate agents who hate Zillow.com would love to be that guy, or his equivalent. I like my example better: you know all those Craigslist advertisements for “Two Men and a Van” to help you move furniture? The new version of those is going to be the two workers with the robotic stair-climbing mule. They’ll help city dwellers move from apartment to apartment, with one worker upstairs loading the mule and another downstairs offloading it. It certainly will take a long time for the robotic economy to replace every little niche.
Finally, the fifth strategy is “step forward”. Write the software that puts your friends and neighbors out of work!
Writing this AI will probably be quite the growth industry for years to come. Unfortunately, it’s a pretty specialized type of programming. And even more unfortunately, there are plenty of programmers in other specialties whose jobs are starting to disappear. For example, setting up a website for a company used to be quite a labor-intensive and remunerative gig, but now there are plenty of automated suites that do the lion’s share of that, leaving only a job for the rarer “stepped-up” or “stepped-in” person to finish the job. There’s going to be plenty of competition in the software field, too, as the simpler jobs are automated away.
What you’ve undoubtedly spotted in those five categories is obvious: while there will still be jobs in existence — and even some new ones — the numbers just won’t add up. When tens or hundreds of thousands of people in a field find their jobs being de-skilled or simply eliminated, the competition for those that remain will be nasty. (Which will drive wages down, ironically.)
There’s a lot more in Ford’s book. I really recommend it.
One thing I want to point out that he got mostly wrong, though, is his portion on Artificial General Intelligence, or AGI. It is common for non-specialists to engage in inappropriate metaphorical thinking when talking about AI and robots. The overwhelmingly vast majority of the AI and robots that we’re seeing, or will see for a long time, is functional AI — it was designed to fulfill a specific productive function. That is radically and fundamentally different from the research going into AGI, which has the goal of creating software that is as flexible and cognitively complex as the human mind — generalized intelligence.
Just because they’re both computer programs doesn’t mean that they have much in common. Both IBM’s Jeopardy-winning Watson and Google’s autonomous driving software are programs that run on computers, but if you asked Watson to drive your car, or quizzed one of Google’s cars with a Jeopardy question, you’d get no satisfaction. That might seem obvious, but far too often the end-product of AGI is magically given all the skills of every software program ever written. Ford, for example, says on page 232, “A thinking machine would, of course, continue to enjoy all the advantages that computers currently have, including the ability to calculate and access information at speeds that would be incomprehensible for us.” You really should pretty much ignore chapter 9.
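To make the category error concrete, here’s a toy sketch; the classes are entirely hypothetical and resemble neither IBM’s nor Google’s actual APIs:

```python
# Two hypothetical "narrow AI" interfaces. Nothing about sharing a CPU
# gives one system the other's capabilities, or even a way to represent
# the other's inputs.
class QuizAnswerer:
    def answer(self, clue: str) -> str:
        return "What is a placeholder?"   # stand-in for a QA pipeline

class DrivingController:
    def steer(self, camera_frame: bytes, lidar: list[float]) -> float:
        return 0.0                        # stand-in for a control loop

quiz = QuizAnswerer()
print(quiz.answer("This 1974 CPU had about 6,000 transistors."))
# quiz.steer(b"", [])  # AttributeError: 'QuizAnswerer' has no 'steer'
```

Assuming a future AGI inherits every narrow system’s skills for free is roughly that category error, and it’s most of what’s wrong with chapter 9.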
Chapter 10, on the other hand, is crucial. The coming century is going to be bad enough with all that Climate Change brouhaha, without the world trying to figure out how an economy works without many or most people having jobs. Science fiction authors have been forecasting dystopian futures for a long time (the one lying behind the story in Peter Watts’ Rifters trilogy is especially harrowing), and we’re really going to want to avoid that. You’ll quickly note that raising the minimum wage doesn’t help — in fact, it creates incentives to automate that much more quickly. Plans that provide a guaranteed minimum income make more sense, although anyone familiar with the political climate in the United States won’t give that much chance of happening.
Frankly, I’ve been telling anyone I care about who has kids to make sure they’ve got the know-how and land to garden, but I’m pretty sure I’m considered an alarmist.
Almost the perfect piece of fluff.
I think it is somewhat curious that vampires don't seem to be à la mode as they once were. Werewolves were ascendant when this book was written. We can also see that in other areas of fashion — in the nineties and aughts, the androgynous look was very in. Remember when the coolest guys were the metrosexuals? Now, all those guys seem to have beards, and are wearing flannel shirts. I dearly hope we don't head into chupacabra territory next.
Amusingly, the back of this ebook edition has questions intended to help a bookclub have a thoughtful discussion after reading this. I can see how some of them might provoke an interesting discussion, but the only one that is actually provocative would be the one about the vampires engaging in euthanasia.
I'd twist the question around a little bit. Imagine that there were, indeed, vampires among us, and that they need to consume human blood to live. First, would you be willing to donate blood to feed them? What if it had to be "fresh" — i.e., not refrigerated from the bloodbank?
If an actual bite was a physically ecstatic experience for the donor, would that increase your interest in being a direct donor? As in someone actually sinking fangs into your neck, knowing that you'll heal instantly and have no chance of acquiring any disease?
Okay, what about if you were terminally ill, and this appeared to be the most peaceful means of dying?
Would you vote to allow it as a form of capital punishment?
Why to-be-read: I'm a little surprised I've just now gotten around to adding this to my to-be-read shelf. I heard the hypothesis quite some time ago, and this book has been referenced in quite a few of the other cognition books that I've read. Even though I very much disagreed with Charles Murray's conclusions in Coming Apart: The State of White America, 1960-2010, this "sorting" was effectively the framing of the problem he was addressing.
The connection comes in the last portion of the podcast, when the results of an experiment are presented: people who were adamantly opposed to same-sex marriage were engaged in conversation in an effort to discover whether the technique for changing people's minds actually works. (The technique emerges in the earlier portion of the podcast.)
And it did. But it relied on the people with prejudices actually spending one-on-one face time with a person who was the target of their prejudice, in a non-contentious, mostly "normal" conversation. The key is that it has to be a person in the target group of the prejudice. When a gay person was on the other side of the conversation, the reduction in prejudice was substantial as well as long-lasting. When the same conversation was with a straight person, the reduction didn't last very long.
Why this is important with respect to this book should be obvious: the fact that people in the United States are increasingly sorting themselves into like-minded communities means we, collectively, are not spending any time with the targets of our prejudices. I can see this almost every day amongst my liberal friends here in San Francisco, some of whom treat conservatives as an alien species, whom they don't expend any effort to actually understand. And I can see it among conservatives, too — although there isn't a hint of the violent attitudes among my liberal acquaintances that is sometimes disturbingly present in the comments of folks on the extreme right.
Well, duh. I suspect these ideas are part and parcel of this book. (As you might have recognized, the foregoing is really just a note to myself :-D )
I was just using the math portion to drill myself on high school math. Having once worked in the test-prep biz, I understand that it can be difficult to formulate questions precisely the way the test designers do. But there was a handful of questions in here with ambiguous prompts, and writing unambiguous prompts is the primary job.