Matt Richtel, a technology writer for the New York Times, also writes thrilling and provocative science fiction. The Cloud is set in (where else?) San Francisco and Silicon Valley in the present day, and follows Nat Idle, an investigative reporter, as he painfully uncovers a story that questions the safety of some emerging technology (any more details than that would qualify as spoilers).
Richtel's strong suit is the relentless energy of the plot and, with caveats, the likeability of his characters. On the other side is the over-likeability of some of those characters: far too many are super-sized and exaggerated to the point of being superheroes. Probably the weakest element of the story is that Richtel throws in too much: there are so many elements to keep track of that it almost becomes necessary to keep notes, and this burden is undoubtedly enough to turn off some readers. The abundance leaves a few plot threads and some characters half-baked. Richtel either needs a longer, more carefully paced book, or he needs to exercise a bit more discipline and pull some weeds.
The ultimate answer found in the reporter's quest won't surprise anyone who closely follows criticism of technology, although the danger is elevated here for dramatic emphasis. The only other place where current technology steps over the line into fiction is holography, which has been teasing technophiles for decades now.
The Cloud is a quick read and a quite enjoyable fast-paced adventure. Don't expect too much more and you'll enjoy it.
Disappointing. First, since I've read so many books on related topics, much of what Chorost spends time explaining I've already long since learned, so the book felt slower and less intriguing than it probably would for other folks.
But the second reason — and why it barely gets those three stars — is that the author ends up with an almost Pollyanna-ish view of the prospects of integrating the Internet into the human mind.
He pays lip service to the dangers, but doesn’t really do any significant examination of what those threats might be like. For example, he notes that VR pioneer Jaron Lanier warns of “cybernetic totalism” in his You Are Not a Gadget, but dismisses that on the grounds that “the Internet is separate from the human body,” and that a direct connection can “enhance empathy and the direct recognition of another person’s uniqueness.” Uh, well, sure — that’s possible. But isn’t it also quite possible that some folks will get an even more visceral thrill out of bullying or attacking someone with that direct connection?
The problem with the predictions and suggestions in this book is that they universally imagine a pleasant outcome, and then proceed as if that outcome were not merely plausible but likely, or even guaranteed.
Part of this seems to be due to the author's clumsy reliance on metaphorical thinking. When imagining how wondrous it will be when humans can actually share thoughts, he pauses and notes that granting the thoughts of others access to your own brain is a bit problematic, considering how close that comes to schizophrenia:
It raises the possibility that even if [a World Wide Mind] could be created, it would present a threat to users’ sanity. However, I think the risk of schizophrenia is not as substantial as it might appear. As I explained earlier, input from others would probably feel distinctly different from one’s own self-motivated brain activity by virtue of its lesser intensity and relative incompleteness. It would no more fool the user than a photo fools the viewer into thinking he is seeing the actual scene.
There is no real reason to believe that how we perceive a photo predicts anything about how inserted thoughts and emotions would feel; the resemblance is entirely superficial, yet that's as far as he goes with the problem. If you really wanted to examine the analogy, you could easily extend it to trompe l'oeil, for example, where images do fool the viewer.
But even beyond that, the consumer entertainment industry would undoubtedly strive mightily to make those impressions "more real than reality," wouldn't it? And once those techniques were known, who is to say what malefactors might do with them? I can easily imagine a viral advertisement that sneaks into the brain to make every memory and thought of Disneyland warmer and fuzzier, or that retunes my taste buds to go positively orgasmic when I suck down a Coca-Cola.
These are not the kind of ideas that Michael Chorost has examined in this book. What he has presented is a first peek at that world, and one that is heavily biased towards the positive.
This book was good, but it was either written too early, or perhaps written from the wrong perspective.
The basic concept: the author put himself through as many of the next-generation medical tests as he could, in three primary areas: genetics, toxicology, and neurology. Some of these tests are available to the average patient/consumer under limited circumstances, but the majority are out of reach: some simply because of cost, others because they are still so experimental that the implications of their results aren't well understood even by the scientists, much less by doctors and patients.
In theory, what made the book more than just a litany of tests was the personal impact on a human: the author. He worked hard to make us understand when he feared the results, when the test itself was onerous, how he felt when taking a test that might tell him bad news without recourse to treatment. Sometimes that worked, but more often his experiences as “the experimental man” were too distant and abstract. He was and remains, after all, a fundamentally healthy middle-aged man.
The best part of the book was the description of the various tests and the growing realization of how much things are changing. In the next decade or so, these tests will reveal aspects of what is going on inside us that would have been inconceivable only a short while ago. How are these very expensive tests going to be made available? Some are already on the consumer market; others require a doctor's request. But what if the testing companies become like the drug companies and encourage us to push and shove our doctors into ordering tests we might not need? What will that do to already critical health care costs?
The book’s other strong point was when the tests the author took shed light on his brother’s health problems, or on his daughter’s future health. This allowed him to dip his toe into the dilemma of knowledge without power. Some tests partially explained what was ailing his brother, but provided absolutely no promise of help, much less health. Other tests hinted his daughter might face serious problems in the future—but was this knowledge a boon or a burden?
Unfortunately, most of the rest of the book ended up a litany of exams taken for no real reason by a healthy person. Perhaps it was written too early: in a few more years, when these tests are closer to having a real impact on large numbers of people, the same material would have been far more interesting and informative. Or perhaps the perspective was wrong: he could have found other cases like his brother's, involving people with real problems that these tests might soon be able to help with, or at least better illuminate. There would have been much more drama, though perhaps also more heartbreak.
For anyone interested in what kind of medical science we’re heading for, this is still a worthwhile book, despite its limitations.
P.S., for amusement only: I took one of the online cognitive tests pointed to in the book (via www.experimentalman.com) entitled “What’s the Age of Your Brain?” and received the pleasant if somewhat startling result that the brain in my fifty-year-old body is a mere 18 years old.
Executive Summary: don't bother; the Beginner's Guide to the Singularity still needs to be written. (But see "Bonus Points" at the end of this review for an interesting link.)
I was looking forward to liking this book: the title is an obvious reference to the tech singularity, and a good introduction to the subject would have been a useful book.
But this ain't it. First, Dooling spends far too much effort being clever. Now, I don't mind clever: if the author stays on topic, it can be a delightful addition to the right book. For example, Mary Roach does an excellent job of combining a smart-but-goofy sense of humor with her scientific subject matter (although there are definitely folks who don't like her style, either). But Dooling doesn't just toss in cute allusions or snarky footnotes; entire paragraphs and subchapters wander off topic.
Second, Dooling couldn't decide who his audience is. Someone technical enough to understand all those in-jokes and off-topic nonsense will be bored to tears with explanations of why one should do backups, and will probably be scornful of his assertion that everyday folks need to learn programming languages. (One of the biggest goals of software design is ease-of-use: explicitly trying to get computers to compensate for human limits. But Dooling wants everyone to learn to program because a computer of the future, uh, "will have a sentimental fondness for its mother tongue." Astonishingly errant nonsense.)
Many of those same clever jokes are going to leave the average non-technical reader confused, or worse: distracted. Translating an Emily Dickinson poem into the programming language Python was vaguely amusing, but it only held my attention because I'm enough of a programmer that I tried to actually decipher how Python compares with the many programming languages I know. For the average reader, it's a bewildering waste of time.
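To give a flavor of the gimmick, here is my own toy rendering of a Dickinson stanza as Python. It is purely hypothetical (Dooling's actual translation, and even his choice of poem, may differ); it exists only to show what such a stunt looks like:

```python
# A hypothetical rendering of the opening of "Because I could not stop
# for Death" as Python -- my own sketch, not Dooling's actual translation.

def could_stop_for(rider):
    """The speaker is too busy for this particular appointment."""
    return False

death, me, immortality = "Death", "me", "Immortality"

if not could_stop_for(death):      # "Because I could not stop for Death -"
    carriage = [death, me]         # "He kindly stopped for me -"
    carriage.append(immortality)   # "The Carriage held but just Ourselves -
                                   #  And Immortality."
    print(carriage)
```

Amusing if you can read it; noise if you can't. That is the audience problem in miniature.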
Third, he couldn't quite decide what the book was about. Is it about the singularity? Well, some chapters more-or-less stick to that subject. But why is that intermixed with his fondness for Unix and command line interpreters, or his biases towards text editors over word processors? Or the book-ending digression into something about religion, cognition, evolution and flying spaghetti monsters?
Chapter Ten is titled "Be Prepared!" and attempts, clumsily, to tell us how to get ready for the time when technology will change everything, even if it isn't as apocalyptic as Kurzweil's vision of the singularity. It isn't too well thought out (this is the chapter that, among other things, tells folks to learn to program), but I suspect a fuzzy notion of such preparation is how he was able to convince himself that discussions of Open Source software and Post-Rapture Religion would be useful. They aren't.
There were definitely good points in the book. He clearly did quite a bit of research, so there are quotes galore to lead the interested reader to further study. And he tosses in a silly story about how Dad and Son, needing to keep a play date with their World of Warcraft buddies, have to deceive and manipulate Mom, who simply doesn't get it. Fun, but not actually useful.
The only portion of the book that I really enjoyed was the reminder that Bill Joy ("The Other Bill") wrote a cautionary article on the future for Wired Magazine back in April 2000 (see the Technology concerns subheading in Joy's Wikipedia page, or the article's Wikipedia page, or the article itself). Many foolishly focused on Joy's depiction of runaway nanotechnology (the "grey goo scenario"), but I was more impressed by his nightmares over "KMD": knowledge-enabled mass destruction. Global destruction by out-of-control Von Neumann machine is quite unlikely, but the inexorably descending barriers to some destructive technologies (such as genetic engineering -- the "knowledge") will enable future terrorism far worse than we've ever seen. Dooling also reminds us that Theodore Kaczynski -- the Unabomber -- wrote scathingly and brilliantly on the technological future. (I have always resented Bill Joy because I was forced to learn and use Sendmail, but I have since learned that he isn't responsible for that atrocity, so I guess now I only resent him because he's a tech millionaire.)
But even that chapter ends poorly when Dooling compares the dark side of tech to research and development of atomic weapons, and proceeds to ham-fistedly distort the era's complex social history as well as the motivations of the scientists. Grossly oversimplifying such a fraught time to provide a poorly thought-out lesson and a bit of trivial entertainment was very distasteful.
OK, bonus points for providing this link to Paul Boutin's blog essay "Biowar for Dummies". Definitely worth reading.
If this book had just arrived as a scifi thriller, I might have given it five stars. But it has been hailed as a novel of revolutionary vision, and I think that's mostly bogus. Don't get me wrong, though: this is an exciting book and an excellent first novel. Suarez tells his story with an insider's understanding of modern computer technology, which makes it a special delight for folks with a similar background. And the basic idea of software bots that activate on cue and interfere with society is brilliant and scary: more realistically scary than the techno-thrillers of Tom Clancy, Dan Brown, or the especially clueless Matthew Reilly.
Here's the executive summary (no more of a spoiler than the blurbs):
A multimillionaire genius computer programmer (read: mad-scientist type) dies, and strange events start hitting the headlines. It seems he was a bit angry and a bit crazy, and had planted a very complex distributed AI system on computers throughout the internet. Its components watch news feeds for his obituary and then start wreaking global havoc.
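The premise is essentially a distributed dead man's switch. Here is a minimal sketch of the core idea, with a hypothetical feed URL and a stand-in payload; nothing here comes from the novel itself:

```python
# A minimal "dead man's switch": poll news feeds for the creator's
# obituary, then act. The feed URL and the "havoc" are placeholders.
import time
import urllib.request

FEED_URL = "https://example.com/obituaries.rss"  # hypothetical news feed
TRIGGER_NAME = b"Matthew Sobol"                  # the Daemon's dead creator

def obituary_posted():
    with urllib.request.urlopen(FEED_URL) as response:
        return TRIGGER_NAME in response.read()

def wreak_global_havoc():
    print("daemon activated")  # stand-in for the novel's mischief

while not obituary_posted():
    time.sleep(3600)  # check once an hour, quietly and indefinitely
wreak_global_havoc()
```

The scary part is that every piece of this is mundane; only the coordination at scale is fiction.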
Non-geeks will probably find the high-tech quotient a burden to deal with (it does sound like so much gobbledygook if you don't know the lingo), but the author is not merely flaunting clever jargon; he is using appropriate jargon to describe technology that is critical to the plot.
Much of the tech stuff is just window dressing, no more essential in and of itself than the sight and sound of an electrocardiograph is to an operating-room scene in a hospital drama. For example, early in Daemon a hacker uses a carefully crafted picture on a website to break into a target computer system. As a plot device, this primarily serves to extend that character's power over others by hacking into their computers. To a mildly technical reader, it seems outlandish: is it really true that nothing more than viewing a specific picture could be a security flaw? And to nerds who know something about how computer security works, it is somewhat chilling: this isn't just plausible, it's factual. The possibility of a so-called poisoned JPEG attack was discovered back in 2004, and dealt with, of course, but only on systems where the correct software patches are applied. And everyone in tech knows that plenty of corporations (and even more individuals) don't pay enough attention to keeping their security patches up to date. So, as the tech magazine Wired points out, Daniel Suarez gets very serious "geek cred".
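For the curious: the 2004 flaw (Microsoft's MS04-028) hinged on a detail of the JPEG format. A comment (COM) segment carries a two-byte length field that counts itself, so a value below 2 makes a careless parser compute a negative (effectively enormous) payload size and overrun its buffer. Here is a rough sketch of a scanner for that malformed field; the file name is hypothetical, and real-world JPEG parsing has more edge cases than this:

```python
# Rough scan for the malformed comment-length field behind the 2004
# "poisoned JPEG" flaw (MS04-028). Illustrative only.
import struct

def has_poisoned_comment(path):
    with open(path, "rb") as f:
        data = f.read()
    i = 2                                # skip the SOI marker (FF D8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                        # lost sync; give up
        marker = data[i + 1]
        if marker == 0xDA:               # start-of-scan: pixel data follows
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker == 0xFE and length < 2:
            return True                  # COM length counts itself, so < 2
                                         # underflows a naive "length - 2"
        if length < 2:
            break                        # otherwise malformed; stop scanning
        i += 2 + length
    return False

print(has_poisoned_comment("suspect.jpg"))  # hypothetical file
```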
For science fiction writers, this is nothing new: authors who actually understand and use real-world science have a special place in the hearts of their fans. Historically, "hard scifi" tends towards physics, but as our world has become more technical, a wider range of specialties has been woven into speculative fiction: cognitive science and neurology, sociology, economics. The "CSI" phenomenon wouldn't exist without the allure of yet another high-tech specialty, forensic analysis. Suarez's work could rightfully be considered cyberpunk (especially in the final chapters), but it actually has fewer fantastic elements than many non-sci-fi technology thrillers; in most respects, it deals with technology that exists in today's marketplace. Even the extreme "cyberpunk" aspects are rooted in R&D products that tech-heads see often in their blog readings.
But some enthusiastic reviewers have fallen into the same mistake that Suarez seems to have made: the AI he portrays isn't a simple collection of bots, or even weak AI; it makes decisions that would require the presence of a strong-AI consciousness.
Clever and malicious bots hidden within the 'Net could undoubtedly wreak havoc, but by the end of the novel a vast number of individuals are doing complex and creative work on behalf of the Daemon. Make as many Dilbert jokes as you want, but no enterprise would succeed as well as this one does without management that understands its people and teams: who has the skills for each project, how to handle the inevitable hiccups, what to do with the incompetent ones, when to promote the ambitious ones. We are given to believe that the mad genius has somehow written "bots" that can do all this, and do it with sublime efficiency.
Many decades of AI research have yielded one very surprising conclusion: the stuff that humans consider tough is often easy for a computer, while the stuff we find easy is incredibly difficult. Example: our brains' visual processing is the result of many millions of years of evolution, in which our ancestor critters died if they didn't perceive that predator scarcely visible in the shadows of falling autumn leaves. Software vision systems have barely begun to tackle the problem, and function only in domains where the scenes are quite simple; the AI-driven automobile challenge sponsored by DARPA (see Wikipedia) is really pushing the envelope, for example. But Suarez's Daemon soon manages to get "robot motorcycles with whirling blades" (cf.) speeding down crowded city streets and swarming like sharks around their victims. How did it all get so easy for a simple software bot?
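This asymmetry is well known enough to have a name, Moravec's paradox, and a toy example of my own (nothing here is from Suarez) makes the point. The task humans find hard is a one-liner for the machine; the task a toddler does effortlessly defeats the obvious program:

```python
# Toy illustration of Moravec's paradox -- my own example, not Suarez's.

# Hard for humans, trivial for the machine: thousand-digit arithmetic.
print(2 ** 3321)                 # a 1000-digit number, computed instantly

# Easy for humans, hard for naive code: "is this the same shape?"
# Two 3x3 "images" of the same plus-shaped blob, the second merely dimmer.
scene_a = [[0, 9, 0],
           [9, 9, 9],
           [0, 9, 0]]
scene_b = [[0, 5, 0],
           [5, 5, 5],
           [0, 5, 0]]

def naive_same_shape(a, b):
    # Pixel-for-pixel equality: the obvious approach, and the wrong one.
    return all(a[i][j] == b[i][j] for i in range(3) for j in range(3))

print(naive_same_shape(scene_a, scene_b))  # False -- yet any human sees
                                           # the same shape at a glance
```

Real vision systems do far better than this, of course, but the gap between computing and perceiving is exactly the one Suarez's bots leap over without explanation.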
Suarez's second error is in eliminating the vast unpredictability of how events transpire. Even his archetypal multi-billionaire mad genius wouldn't have been able to map out and deal with the huge number of variables involved in this effort. The genius has apparently even predicted how events will unfold after his own death with such incredible accuracy that he can record his half of conversations his avatar will be having many months later. The only way Suarez's fiction could work here would be if humans were just as predictable and limited in their reactions as the bots.
To reiterate: this is a fun and exciting novel. If you can tolerate or even enjoy the elevated high-tech aspect, and you like thrillers, then you'll probably have a great time here.
My complaint is in response to treating this as a visionary and cautionary tale. Stewart Brand's futurist Long Now Foundation invited Daniel Suarez in to give a talk (download the MP3 here or listen to it on iTunes) as someone with something "important" to say, and I think that was just silly.
Sept 2010 update below.
Excellent book. Not a convincing argument, but a very refreshing and provocative contrarian perspective.
Johnson provides evidence that much of our mass entertainment, even the stuff we often shudder at, is steadily pushing up the IQs of its consumers. He focuses our attention on television (including reality TV!), video games, and much else in this effort.
Two things are crucial to note, though.
First, Johnson's title and subtitle ("How Today's Popular Culture Is Actually Making Us Smarter") are deeply ambiguous, since "smarter" and "good for you" are extremely subjective concepts. He really shouldn't have used such loaded terms, and he doesn't go nearly far enough in explaining and narrowing his objective. What his text actually does, and does excellently, is argue that many aspects of modern culture are making humans better at solving certain kinds of puzzles and better at thinking about complex situations.
He provides fairly persuasive evidence that consumers of mass media can now understand and enjoy entertainment that would have been bewilderingly complex for the masses just a few decades ago; the evidence is broad, but most convincing in television and video games. The reason behind this is quite astonishing: in order to keep voracious audiences coming back for more, producers have to "keep it fresh," adding something interesting and new to the mix with each iteration. One way of doing that is to tease the brain with subtlety and complexity, and thus we are, in effect, trained over the decades to understand and even enjoy this complexity. [The quest for novelty has also long been seen as a reason for the inexorable spread of sexuality and violence in media, although I don't recall Johnson exploring this sidebar.]
But Johnson doesn’t really argue that this makes anyone more moral, or happier, or that it makes society better, or even that a complex show is in any other sense qualitatively better than a simpler show.
This is related to the second point: Johnson's argument should be taken as descriptive, not prescriptive. Many readers seem to have trouble with this; folks automatically assume that anything an author spends a great deal of time and effort elaborating must be something the author approves of. But often (and I believe this book is a good example) the intent is instead to explore a fascinating topic and illuminate it for a broader audience to ponder.
Johnson doesn't do a very good job of explaining this, which is a shame. We spend so much time agreeing with ourselves that mass entertainment is corrosive that the contrarian point of view becomes almost shameful. And even after reading this book, it is easy to still conclude that popular media is destructive, but because of the morality of its content rather than its form. It is possible that McLuhan was wrong, or at least that the story is more complicated than we'd previously believed.
And frankly, that’s good. Even though I haven’t watched more than an hour or so of television per year for about a decade now, I do appreciate the increasing complexity of stories offered up in Western culture. And the story of technology’s impact on humanity is, itself, a tale that becomes more delightfully engrossing as it becomes more curious and twisted.