It was a kick. Predictably reminiscent of early Tom Clancy, before he corrupted his technowar thrillers with his naive brand of libertarian politics.
I especially enjoyed how the North Shore Mujahideen subverted the traditional role the U.S. plays in a conflict, and the exploration of the morality of dirty hands in guerrilla strategy.
There was a little too much U.S.A. rah-rah, however, along with quite a few obvious cultural stereotypes.
Oh, and a few spoilers: A key vulnerability that cripples the U.S. at the beginning is that the microchips sourced from low bidders came from China, which had compromised the designs. The key phrase was “Each antenna was microscopic, hidden inside a one-millimeter square and activated only by a specific frequency of an incoming missile.”
I'm not an expert, but I do know technology relatively well.
First, circuits look like cityscapes from a few thousand feet up, and a one-millimeter square would be about as obvious as a football stadium surrounded by parking lots. Security agencies have been studying aerial photographs since forever (you might recall that U-2 aerial photography revealed the distinctive pattern of Soviet missile installations).
I also know there are companies that specialize in reverse-engineering chips (a college friend worked at one), shaving off the plastic around the silicon until they can image the circuitry. It seems pretty damn obvious that the U.S. military would use these two very reliable techniques to inspect a representative sample of the chips going into weapons systems.
Second, even if the antennas got into the chip, a one-millimeter antenna is going to be pretty wimpy. A Bluetooth antenna is about 6 mm across its largest dimension. Something that small will only respond to incredibly high frequencies (I think), which are easy to shield. Sure, an incoming missile could be dumping huge amounts of energy into broadcasting a signal, I guess — but it still seems really fishy.
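A quick back-of-the-envelope check supports that hunch. Here's a sketch, using my own numbers and the (generous) assumption that the antenna works as a quarter-wave element; nothing here comes from the book:

```python
# Back-of-envelope check: treat the one-millimeter antenna as a
# quarter-wave element and estimate the frequency it would respond to.
c = 3e8                       # speed of light, m/s
antenna_m = 1e-3              # the book's one-millimeter antenna
wavelength_m = 4 * antenna_m  # quarter-wave resonance
freq_ghz = c / wavelength_m / 1e9
print(f"resonant frequency: ~{freq_ghz:.0f} GHz")  # ~75 GHz
# 75 GHz is millimeter-wave territory: strongly attenuated by the
# atmosphere and trivially blocked by a weapon's metal housing.
```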
Third, even if the antennas got into the chip, asserting that the associated firmware could also get onto the chip is implausible. Microprocessors typically don't have software on-chip; they get it from RAM and ROM elsewhere in the system. There would have to be dedicated circuitry listening to that antenna, doing signal processing, detecting when a valid signal had been received, and then subverting the rest of the system's behavior — all without ever doing real field-testing. I'm pretty sure the idea is laughable.
Similarly, there's a security-badge RFID hack that disrupts the U.S. military offices at the beginning. RFID chips are absurdly simple: they use an antenna to receive power, do some fairly minimal processing with that trickle of energy, and then broadcast a reply at a much, much lower power level.
But here, the RFID chips are sophisticated enough to go wardriving, looking for weak wifi signals once inside the building, and then sustaining a connection long enough to upload pretty sophisticated hostile software. Uh, no: the kinds of electronics-detection equipment in use would never be fooled into accepting something that complex as a security badge, no matter how hard it tried to look like one. And since it would need a moderately powerful battery onboard (the power received from an upstream RFID query isn't going to be anywhere near enough), it's going to be very, very obvious.
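To put rough numbers on that last point, here's a sketch using the Friis free-space equation. The reader power, frequency, and distance are round figures I'm assuming for illustration, not anything from the book:

```python
import math

def friis_rx_power_w(p_tx_w: float, gain_tx: float, gain_rx: float,
                     freq_hz: float, dist_m: float) -> float:
    """Upper bound on power received in free space (Friis equation)."""
    wavelength = 3e8 / freq_hz
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

# Assumed round numbers: a 915 MHz UHF badge reader at 4 W EIRP,
# a unity-gain tag antenna two meters away.
harvested_w = friis_rx_power_w(4.0, 1.0, 1.0, 915e6, 2.0)
print(f"harvested: {harvested_w * 1e3:.2f} mW")  # ~0.7 mW, only while queried
# A Wi-Fi radio draws on the order of hundreds of milliwatts while
# scanning and transmitting, hence that suspiciously large battery.
```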
All in all, a great technowar thriller. If you like that kinda stuff, read it.
In the coming years, your job is very likely to evaporate. That might mean now, or it might mean twenty-five or thirty years from now. But unless you’re extraordinarily unusual, it’ll happen.
I’m going to start by giving a few examples.
Take the profession of accountancy. I’m oversimplifying, but pretty much what an accountant does is match an entity’s financial information to the appropriate laws and rules, provide analysis of how well those match up, and maybe fill out some forms. Guess what? There’s nothing in there that a software program couldn’t do. In fact, many people who don’t make a lot of money already use such software to file their taxes, and every year that software gets a little more sophisticated, leaving accountants less and less to do. The profession of accountant will likely be almost completely extinct within a decade (long before we see those autonomous cars everyone keeps talking about).
Let’s look at something much tougher, like a barber or hair stylist. The job there is to examine the client’s features, ask questions about what that client wants, suggest a style that is both feasible and desirable, and then cut hair to that style. Right now, that is about as far from what a computer could do as any profession in existence.
Well, first, speedy dexterity isn’t something that robots are too good at, except when they can be programmed to do precisely the same thing, over and over again, in which case they do much better than mere humans. And comprehension of a complex visual scene is another really tough computational problem. But if you’ve been following the pace of progress, you know that it is only a matter of time before the robots get there.
There’s a video floating around of robots failing miserably (but amusingly, and with silly music, so we can feel superior!) during a DARPA challenge. Recall, however, how very recently the idea of a robot walking around on two feet would have been absurd. Now we laugh because they sometimes fall down while trying to open doors or climb stairs or get into cars. Given the many millions going into research, how long do you think that will last?
A vast database of head shapes and facial and hair features could already be built just by mining the treasure trove of images accessible via the world wide web. AI that learns which of those are considered comical and which attractive would still be a challenge, but it’s probably an easier task than programming Watson was for IBM. Programming a hair-cutting robot with the knowledge of what set of snips will create the desired look would be easier still, since it could be endlessly simulated purely in virtual space.
Yeah, it will take years before we see this happen, but that just means it will be at the tail end of the tsunami instead of at the beginning, where the accountants are already feeling vulnerable. (This makes me wonder, how many out-of-work accountants will be able to get jobs as hairdressers?)
There are some jobs that, as far as we can tell, are completely out of range of the robots and their AI software, but that number will get smaller and smaller over the decades, as engineers learn to make the software more sophisticated and the hardware it runs on continues to get faster.
The real sweet spot for humans is to be truly creative. That doesn’t mean anyone in a “creative field” gets a pass, however. AI is already composing quotidian music and doing the rote work of journalists. Being really creative means knowing when and how to break the rules in a way that is fundamentally unexpected. A computer never would have created John Cage’s 4’33”, for example.
The work of Thomas Kuhn, whose The Structure of Scientific Revolutions made the word “paradigm” the cliché it is today, illustrates this. Most science, like most creativity, exists within a paradigm that people in the field understand. Most “normal science”, like most normal creativity, doesn’t bust out of that paradigm. Highly sophisticated software can be taught that paradigm, how to explore its domain, and how to evaluate whether the results of those explorations are consistent with other highly regarded results. What it can’t be taught is when and how to break the paradigm itself.
How this revolution is progressing is what Rise of the Robots: Technology and the Threat of a Jobless Future is all about.
Now, you might be skeptical. This does sound, after all, like the Luddite Fallacy, doesn’t it? If you don’t know the term, it refers to the time at the beginning of the industrial revolution when craftsfolk who wove cloth on hand looms tried to stop the new machine looms from making them redundant. The “fallacy” part is because there have always been compensatory effects — some people lose their careers, but the gains in technological capacity and productivity make other forms of production possible, employing even more people.
So why is this time so different? Because what the machines are replacing is different.
The simple machines replaced work that was dirty and dangerous. In the past century, more sophisticated machines replaced work that was dull — those robots that bolt together auto bodies, for example, replaced large numbers of men who used to get pretty good wages for doing an unremittingly boring job.
But today, machines are replacing our minds, not our muscles. More importantly, it is very unlikely that some vast new field of economic activity will suddenly appear on the horizon that will employ all of the workers made redundant — once machines are stronger and faster, more accurate and precise, more patient and (at least) as smart, what kind of job would that be?
If you need more convincing, here’s an analogy. Once upon a time, humans used animals to do our brute labor. It took thousands of years for us to arrange that, of course. Before we’d invented the wheel, animals could only carry stuff on their backs. Reliable wheels were quite a stunning leap forward! Eventually, animals could do most of our hardest labor, except where our brains made us more adaptable to change or better with subtle details.
But think about what happened when we invented the steam engine. The first practical steam engine came along (as did a stunning number of other developments) right near the end of the eighteenth century (which is why those Luddites were rioting a few decades later). Even though it took millennia for us to learn to use animals, in most ways we’d retired them within a century. The key point is that even though those animal muscles could still have been used, there were effectively no jobs for which they were actually better than machines.
That’s where our brains are about now.
Now, there are still people who don’t believe this is going to happen. For example, the essay How Technology Is Destroying Jobs quotes a professor of engineering at MIT:
❝For that reason, Leonard says, it is easier to see how robots could work with humans than on their own in many applications. “People and robots working together can happen much more quickly than robots simply replacing humans,” he says. “That’s not going to happen in my lifetime at a massive scale. The semiautonomous taxi will still have a driver.”❞
Really? By all indications, autonomous vehicles are already safer than human drivers. Although there are still tricky situations where they could make disastrous choices, they’d still probably have a better overall safety record than us, and they’ll be getting better — we won’t, except with their help. So why would that taxi company want to pay to have a more-fallible human sitting there, bored, to second-guess the computer? It is true that people and robots working together can sometimes do better, but in far too many cases that will be a fairly short interim period, until the software engineers understand what humans are contributing and replace those final aspects — economics will create huge incentives to get the human out of the picture.
So what’s a worker to do? One oft-cited article suggests five strategies.
First, “step up”. Head for higher intellectual ground.
What’s the flaw here? Well, the top of the pyramid would be a great place to be, but there simply isn’t much room up there. The example given is that, instead of using a biochemist to do a preliminary evaluation of a candidate drug, we should let the computers do it and have the biochemist “pick up at the point where the math leaves off”. The difficulty is that there is already a researcher doing exactly that, and the computers are replacing the dozens of lower-tier chemists doing the simpler work. It’s like telling a sous-chef to “step up” and become the restaurant’s chef de cuisine! That might work for a very small number of very talented sous-chefs, but it won’t work on any large scale at all.
Second, “step aside”. Use skills that can’t be codified.
One example used here is even more absurd than the biochemist example: “Apple’s revered designer Jonathan Ive can’t download his taste to a computer.” Obviously, we can’t all be Jony Ive. But what about that accountant mentioned at the beginning? Can’t they learn to use their people skills to be better at interacting with clients? Sure — but won’t all the accountants want that gig? And while being the “human face” of the software might be a safe job for quite some time, it reflects a de-skilling of the original job. This is also the category for those truly creative types who can consistently deliver outside-the-box thinking that the programmers can’t predict, and that can’t be found in correlations within huge datasets.
Third, “step in”. Be the person that double-checks the software for mistakes.
An example given here involved mortgage-application evaluation software that rejected former Federal Reserve chairman Ben Bernanke’s mortgage application because it couldn’t properly evaluate his career prospects on the lecture circuit. This will be a pretty sweet job category, but not because the software will continue to make “mistakes”. It’ll be because the software is taught to recognize unusual situations and automatically funnel them to human assistants. Like the human co-pilot of a semiautonomous taxicab, though, there will be a lot of financial incentive to make this a very rare job.
Fourth, “step narrowly”. Find a sub-sub-sub-speciality that isn’t economical to automate.
The example in the article shows clearly how narrow these opportunities are: imagine being the person who specializes in matching the sellers and buyers of Dunkin’ Donuts franchises! Yeah, all the real estate agents who hate Zillow.com would love to be that guy, or his equivalent. I like my example better: you know all those Craigslist advertisements for “Two Men and a Van” to help you move furniture? The new version of those is going to be two workers with a robotic stair-climbing mule. They’ll help city dwellers move from apartment to apartment, with one worker upstairs loading the mule and the other downstairs unloading it. It certainly will take a long time for the robotic economy to replace every little niche.
Finally, the fifth strategy is “step forward”. Write the software that puts your friends and neighbors out of work!
Writing this AI will probably be quite the growth industry for years to come. Unfortunately, it’s a pretty specialized type of programming. And even more unfortunately, there are plenty of programmers in other specialties whose jobs are starting to disappear. For example, setting up a website for a company used to be quite a labor-intensive and remunerative gig, but now there are plenty of automated suites that do the lion’s share of that, leaving only the rarer “stepped-up” or “stepped-in” person to finish the job. There’s going to be plenty of competition in the software field, too, as the simpler jobs are automated away.
What you’ve undoubtedly spotted in those five categories: while there will still be jobs in existence — and even some new ones — the numbers just won’t add up. When tens or hundreds of thousands of people in a field find their jobs being de-skilled or simply eliminated, the competition for the jobs that remain will be nasty. (Which will drive wages down, ironically.)
There’s a lot more in Ford’s book. I really recommend it.
One thing he got mostly wrong, though, is his treatment of Artificial General Intelligence, or AGI. It is common for non-specialists to engage in inappropriate metaphorical thinking when talking about AI and robots. The overwhelming majority of the AI and robots we’re seeing, or will see for a long time, is functional AI — designed to fulfill a specific productive function. That is radically and fundamentally different from the research going into AGI, which has the goal of creating software that is as flexible and cognitively complex as the human mind — generalized intelligence.
Just because they’re both computer programs doesn’t mean they have much in common. Both IBM’s Jeopardy-winning Watson and Google’s autonomous driving software are programs that run on computers, but if you asked Watson to drive your car, or quizzed one of Google’s cars with a Jeopardy question, you’d get no satisfaction. That might seem obvious, but far too often the imagined end-product of AGI is magically given all the skills of every software program ever written. Ford, for example, says on page 232, “A thinking machine would, of course, continue to enjoy all the advantages that computers currently have, including the ability to calculate and access information at speeds that would be incomprehensible for us.” You really should pretty much ignore chapter 9.
Chapter 10, on the other hand, is crucial. The coming century is going to be bad enough with all that Climate Change brouhaha, without the world trying to figure out how an economy works without many or most people having jobs. Science fiction authors have been forecasting dystopian futures for a long time (the one lying behind the story in Peter Watts’ Rifters trilogy is especially harrowing), and we’re really going to want to avoid that. You’ll quickly note that raising the minimum wage doesn’t help — in fact, it creates incentives to automate that much more quickly. Plans that provide a guaranteed minimum income make more sense, although anyone familiar with the political climate in the United States won’t give that much chance of happening.
Frankly, I’ve been telling anyone I care about who has kids to make sure they’ve got the know-how and land to garden, but I’m pretty sure I’m considered an alarmist.
I wonder if I'd rather just watch the PBS series (enthusiastically endorsed by the New York Times), but that's tough when you don't own a television and don't really want to sit in front of the computer watching it online.
I very much enjoyed Steven Johnson's The Ghost Map and Everything Bad is Good for You, so I should probably give this a try.
This is mildly amusing, but sadly informative. I finished Matt Richtel's "Devil's Plaything" recently and just came here to Goodreads to review it, and was somewhat surprised to see I'd previously given three stars to the author's The Cloud.
The reason I find my surprise informative is that both stories are set in San Francisco, both feature the same protagonist, and I read the other book less than a year ago, but I simply couldn't recall it until I'd read enough other reviews that I rebuilt a sense of what the book was about (I'd been overly sensitive to spoilers, and left my own review too ambiguous).
So I'll 'fess up. The third star is generous, mostly because the author sets his stories in the real San Francisco, not the tourist version thereof. But the protagonist isn't very inspiring, and the plot's reach exceeds its grasp. Some folks give this author five stars, so YMMV, but I don't plan on returning to the scene of the crime.
Matt Richtel, a technology writer for the New York Times, also writes thrilling and provocative science fiction. The Cloud is set in — where else? — San Francisco and Silicon Valley in the present day, and follows Nat Idle, an investigative reporter, as he painfully uncovers a story that questions the safety of some emerging technology (any more details than that would qualify as spoilers).
Richtel's strong suit is the relentless energy of the plot and, with caveats, the likeability of his characters. On the other side is the over-likeability of those same characters — far too many of them are super-sized and exaggerated to the point of being superheroes. Probably the weakest element of the story is that Richtel throws in too much: there are so many elements to keep track of that it almost becomes necessary to keep notes, and this burden undoubtedly is enough to turn off some readers. The abundance left a few aspects and some characters half-baked. Richtel either needs a longer, more carefully paced book, or he needs to exercise a bit more discipline and get rid of some weeds.
The ultimate answer found in the reporter's quest won't surprise anyone that closely follows criticism of technology, although the danger is elevated here for dramatic emphasis. The only other place where current technology steps over the line into fiction is holography, which has been teasing technophiles for decades now.
The Cloud is a quick read and a quite enjoyable fast-paced adventure. Don't expect too much more and you'll enjoy it.
Disappointing. First, since I’ve read so many books on related topics, much of what Chorost spends time explaining I’ve already long since learned, so the book felt slower and less intriguing than it probably would for other folks.
But the second reason — and why it barely gets those three stars — is that the author ends up with an almost Pollyanna-ish view of the prospects of integrating the Internet into the human mind.
He pays lip service to the dangers, but doesn’t really do any significant examination of what those threats might be like. For example, he notes that VR pioneer Jaron Lanier warns of “cybernetic totalism” in his You Are Not a Gadget, but dismisses that on the grounds that “the Internet is separate from the human body,” and that a direct connection can “enhance empathy and the direct recognition of another person’s uniqueness.” Uh, well, sure — that’s possible. But isn’t it also quite possible that some folks will get an even more visceral thrill out of bullying or attacking someone with that direct connection?
The problem with the predictions and suggestions in this book is that they universally imagine a pleasant outcome, and then proceed as if that outcome were not merely plausible but likely, or even guaranteed.
Part of this seems to be due to the author’s clumsy reliance on metaphorical thinking. When imagining how wondrous it will be when humans can actually share thoughts, he pauses and notes that granting the thoughts of others access to your own brain is a bit problematic, considering how close that comes to schizophrenia:
It raises the possibility that even if [a World Wide Mind] could be created, it would present a threat to users’ sanity. However, I think the risk of schizophrenia is not as substantial as it might appear. As I explained earlier, input from others would probably feel distinctly different from one’s own self-motivated brain activity by virtue of its lesser intensity and relative incompleteness. It would no more fool the user than a photo fools the viewer into thinking he is seeing the actual scene.
There is no real reason to believe that a photo is a reliable model for inserted thoughts and emotions beyond a superficial similarity, but that’s as far as he goes with the problem. You could easily expand the “photo” analogy to bring in trompe l’oeil, for example, if you really wanted to examine it.
But even beyond that, the consumer entertainment industry would undoubtedly be striving mightily to make those impressions “more real than reality,” wouldn’t it? Once those techniques were known, who is to say what malefactors might want to do? I can easily imagine a viral advertisement that sneaks into the brain to make every memory and thought of Disneyland warmer and fuzzier, or changes my taste buds to go positively orgasmic when I suck down a Coca Cola.
These are not the kind of ideas that Michael Chorost has examined in this book. What he has presented is a first peek at that world, and one that is heavily biased towards the positive.
This book was good, but it was either written too early — or perhaps it was written with the wrong perspective.
The basic concept: the author put himself through as many of the next-generation medical tests as he could, in three primary areas: genetics, toxicology, and neurology. Some of these tests are available to the average patient/consumer under limited circumstances, but the majority are out of reach. Sometimes that’s simply a matter of cost, but other tests are still so experimental that the implications of their results aren’t even well understood by the scientists, much less by doctors and patients.
In theory, what made the book more than just a litany of tests was the personal impact on a human: the author. He worked hard to make us understand when he feared the results, when the test itself was onerous, how he felt when taking a test that might tell him bad news without recourse to treatment. Sometimes that worked, but more often his experiences as “the experimental man” were too distant and abstract. He was and remains, after all, a fundamentally healthy middle-aged man.
The best part of the book was the description of the various tests and the growing realization of how much things are changing. In the next decade or so, these tests will reveal aspects of what is going on inside us that would have been inconceivable just a decade or so ago. How are these very expensive tests going to be made available? Some are already on the consumer market, others require a doctor’s request. But what if the testing companies become like the drug companies and encourage us to push and shove our doctors into requesting tests we might not need? What will this do to already critical health care costs?
The book’s other strong point was when the tests the author took shed light on his brother’s health problems, or on his daughter’s future health. This allowed him to dip his toe into the dilemma of knowledge without power. Some tests partially explained what was ailing his brother, but provided absolutely no promise of help, much less health. Other tests hinted his daughter might face serious problems in the future—but was this knowledge a boon or a burden?
Unfortunately, most of the rest of the book ended up a litany of exams taken for no real reason by a healthy person. Perhaps it was written too early: in a few more years, when these tests are closer to having a real impact on large numbers of people, it would have been more interesting and informative. Or perhaps the perspective was wrong: he could have found other cases similar to his brother’s, involving people with real problems which these tests might soon be able to help with—or at least better illuminate. There would have been much more drama, although perhaps also more heartbreak.
For anyone interested in what kind of medical science we’re heading for, this is still a worthwhile book, despite its limitations.
P.S., for amusement only: I took one of the online cognitive tests pointed to in the book (via www.experimentalman.com) entitled “What’s the Age of Your Brain?” and received the pleasant if somewhat startling result that the brain in my fifty-year-old body is a mere 18 years old.
Executive Summary: don't bother; the Beginner's Guide to the Singularity still needs to be written. (But see "Bonus Points" at end of review for an interesting link.)
I was looking forward to liking this book: the title is an obvious reference to the tech singularity, and a good introduction to the subject would have been a useful book.
But this ain't it. First, Dooling spends far too much effort being clever. Now, I don't mind clever: if the author stays on topic, it can be a delightful addition to the right book. For example, Mary Roach does an excellent job of combining a smart-but-goofy sense of humor with her scientific subject matter (although there are definitely folks that don't like her style, either). But Dooling doesn't just toss in cute allusions or snarky footnotes; entire paragraphs and subchapters wander off topic.
Second, Dooling couldn't decide who his audience is. Someone technical enough to understand all those in-jokes and off-topic nonsense will be bored to tears with explanations of why one should do backups, and will probably be scornful of his assertion that everyday folks need to learn programming languages. (One of the biggest goals of software design is ease-of-use: explicitly trying to get computers to compensate for human limits. But Dooling wants everyone to learn to program because a computer of the future, uh, "will have a sentimental fondness for its mother tongue." Astonishingly errant nonsense.)
Many of those same clever jokes are going to leave the average non-technical reader confused, or worse: distracted. Translating an Emily Dickinson poem into the programming language Python was vaguely amusing, but it only held my attention because I'm enough of a programmer that I tried to decipher how Python compares with the many programming languages I know. For the average reader: a bewildering waste of time.
Third, he couldn't quite decide what the book was about. Is it about the singularity? Well, some chapters more-or-less stick to that subject. But why is that intermixed with his fondness for Unix and command line interpreters, or his biases towards text editors over word processors? Or the book-ending digression into something about religion, cognition, evolution and flying spaghetti monsters?
Chapter Ten is titled "Be Prepared!" and attempts, clumsily, to tell us how to get ready for the time when technology will change everything, even if it isn't as apocalyptic as Kurzweil's vision of the singularity. It isn't too well thought out (this is the chapter that, among other things, tells folks to learn to program), but I suspect a fuzzy notion of such preparation is how he was able to convince himself that discussions of Open Source software and Post-Rapture Religion would be useful. They aren't.
There were definitely good points in the book. He clearly did quite a bit of research, so there are quotes galore to lead the interested reader to further study. And he tosses in a silly story about how Dad and Son, needing to keep a play date with their World of Warcraft buddies, have to deceive and manipulate Mom who simply doesn't get it. Fun, but not actually useful.
The only portion of the book that I really enjoyed was the reminder that Bill Joy ("The Other Bill") wrote a cautionary article on the future for Wired Magazine back in April 2000 (see the Technology concerns subheading in Joy's Wikipedia page, or the article's Wikipedia page, or the article itself). Many foolishly focused on Joy's depiction of runaway nanotechnology (the "grey goo scenario"), but I was more impressed by his nightmares over "KMD": knowledge-enabled mass destruction. Global destruction by out-of-control Von Neumann machine is quite unlikely, but the inexorably descending barriers to some destructive technologies (such as genetic engineering -- the "knowledge") will enable future terrorism far worse than we've ever seen. Dooling also reminds us that Theodore Kaczynski -- the Unabomber -- wrote scathingly and brilliantly on the technological future. (I have always resented Bill Joy because I was forced to learn and use Sendmail, but I have since learned that he isn't responsible for that atrocity, so I guess now I only resent him because he's a tech millionaire.)
But even that chapter ends poorly when Dooling compares the dark side of tech to research and development of atomic weapons, and proceeds to ham-fistedly distort the era's complex social history as well as the motivations of the scientists. Grossly oversimplifying such a fraught time to provide a poorly thought-out lesson and a bit of trivial entertainment was very distasteful.
OK, bonus points for providing this link to Paul Boutin's blog essay "Biowar for Dummies". Definitely worth reading.
If this book had just arrived as a scifi thriller, I might have given it five stars. But it has been hailed as a novel of revolutionary vision, and I think that's mostly bogus. But don't get me wrong: this is an exciting book and an excellent first novel. Suarez tells his story with an insider's understanding of modern computer technology, which makes it a special delight for folks with a similar background. And the basic idea of software bots that activate on cue and interfere with society is brilliant and scary — and more realistically scary than the techno thrillers of Tom Clancy, Dan Brown or the especially clueless Matthew Reilly.
Here's the executive summary: (no more of a spoiler than the blurbs)
A multimillionaire genius computer programmer (read: mad-scientist type) dies, and strange events start hitting the headlines. It seems he was a bit angry and a bit crazy, and had planted a very complex distributed AI system in computers throughout the internet. These watch for his obituary in news feeds and then start wreaking global havoc.
Non-geeks will probably find the high tech-quotient a burden to deal with — it does sound like so much gobbledy-gook if you don't know the lingo — but the author is not just using clever jargon, but using appropriate jargon to describe technology that is critical to the plot.
Much of the tech stuff is just window dressing, and isn't essential in and of itself, any more than an operating room scene in a hospital drama crucially relies on appropriate use of the sight and sound of an electrocardiograph. For example, early in Daemon a hacker uses a carefully crafted picture on a website to break into a target computer system. As a plot device, this is primarily to advance that character's power over others by hacking into their computers. To a mildly technical reader, it seems outlandish: is it really true that nothing more than viewing a specific picture could be a security flaw? And to nerds that know something about how computer security works, it is somewhat chilling: yes, this isn't just plausible, but factual. In fact, the possibility of a so-called poisoned JPEG attack was discovered back in 2004 — and dealt with, of course, but only if the correct software patches are applied. And everyone in tech knows that plenty of corporations (and even more individuals) don't pay enough attention to keeping up-to-date with their security patches. So, as the tech magazine Wired points out, Daniel Suarez gets very serious "geek cred".
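For the curious, the 2004 flaw (Microsoft's MS04-028, in the GDI+ JPEG parser) was an integer underflow in the handling of a comment segment's length field. Here's a minimal sketch of the bug pattern, written defensively with the missing check called out; the details come from public write-ups of the vulnerability, not from the novel:

```python
import struct

def read_jpeg_comment(segment: bytes) -> bytes:
    """Parse a JPEG COM (comment) segment defensively.

    A COM segment is the 0xFFFE marker followed by a two-byte
    big-endian length that includes the two length bytes themselves.
    """
    marker, length = struct.unpack(">HH", segment[:4])
    if marker != 0xFFFE:
        raise ValueError("not a COM segment")
    # This is the check the vulnerable C code skipped: it computed
    # `length - 2` in an unsigned type, so a crafted length of 0 or 1
    # wrapped around to ~4 GB and the subsequent copy smashed the heap.
    if length < 2:
        raise ValueError("malformed comment length")
    return segment[4 : 4 + (length - 2)]

# A poisoned file needed nothing more exotic than: b"\xff\xfe\x00\x01"
```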
For science fiction writers, this is nothing new: authors that actually understand and use real-world science have a special place in the hearts of their fans. Historically, "hard scifi" tends towards physics, but as our world has become more technical a wider range of specialties has been woven into speculative fiction. Cognitive science and neurology, sociology, economics — the "CSI" phenomenon wouldn't exist without the allure of yet another high-tech specialty, forensic analysis. Suarez's work could rightfully be considered cyberpunk (especially in the final chapters), but it actually has fewer fantastic elements than many non-sci-fi technology thrillers — in most respects, it deals with the technology that exists in today's marketplace. Even the extreme "cyberpunk" aspects are rooted in R&D products that tech-heads see often in their blog readings.
But some enthusiastic reviewers have fallen into the same mistake that Suarez seems to have made: the AI he portrays in his book isn't a simple collection of bots, or even weak AI: it is making decisions that would require the presence of a strong-AI consciousness.
Clever and malicious bots hidden within the 'Net could undoubtedly wreak havoc, but by the end of the novel a vast number of individuals are doing complex and creative work on behalf of the Daemon. Make as many Dilbert jokes as you want, but no enterprise would succeed as well as this one does without management that understands these people and teams: who has the skills for each project, how to handle the inevitable hiccups, what to do with the incompetent ones, when to promote the ambitious ones. We are given to believe that the mad genius has somehow written "bots" that can do all this, and do it with sublime efficiency.
Many decades of AI research have provided one very surprising conclusion: the stuff that humans consider tough is often easy for a computer, while the stuff we find easy is incredibly difficult. Example: our brains' visual processing is the result of many millions of years of evolution, in which our ancestor critters died if they didn't perceive that predator scarcely visible in the shadows of falling autumn leaves. Software vision systems have barely begun to tackle the problem, and function only in domains where the scenes are pretty simple. The AI-driven automobile challenge sponsored by DARPA (see Wikipedia) is really pushing the envelope, for example. But Suarez's Daemon soon manages to get "robot motorcycles with whirling blades" (cf.) speeding down crowded city streets and swarming like sharks around their victim. How did it all get so easy for a simple software bot?
Suarez's second error is in eliminating the vast unpredictability of how events transpire. Even his archetypal multi-billionaire mad genius wouldn't have been able to map out and deal with the huge number of variables involved in this effort. The genius has even apparently predicted how events will transpire after his own death with such incredible accuracy that he can record his half of conversations that his avatar will be having many months later. The only way Suarez's fiction could work here would be if humans were just as predictable and limited in their reactions as the bots.
To reiterate: this is a fun and exciting novel. If you can tolerate or even enjoy the elevated high-tech aspect, and you like thrillers, then you'll probably have a great time here.
My complaint is in response to treating this as a visionary and cautionary tale. Stewart Brand's futurist Long Now Foundation invited Daniel Suarez in to give a talk (download the MP3 here or listen to it on iTunes) as someone with something "important" to say, and I think that was just silly.