At the superficial level, this is a very enjoyable story of "Two Society Girls in the West" — specifically, two restless twenty-something women bored with the idea of the future that is expected of them, and drifting through mild adventures (and flirting with dreaded spinsterhood) until this quite astonishing opportunity arises: be schoolteachers (sans any training) at the frontier deep in the Rocky Mountains.
It isn't really the frontier — this was more than twenty years after 1893, when the U.S. Census Bureau declared that the frontier had been closed. But this was a community far enough off the beaten path that few services were available, and so it feels pretty close to the era of Laura Ingalls, even though the nearest train depot, and its connections to the rest of the world, are less than a day away.
The author is the Executive Editor of the New Yorker, and writes wonderfully. True, she writes in the labyrinthian style of the New Yorker's long-form journalism, with its seemingly endless recursive digressions. If you really want a linear narrative, with a constant view of the destination always in sight, then this book (and the New Yorker) probably isn't for you. If you think side trips into subsidiary topics are fine, as long as they are entertaining and at least tangentially relevant to the story, then you'll enjoy the ride.
Since our heroines are thrown into the job of teaching, folks in that profession will get an extra kick out of this, sympathizing and identifying with their crises and thrills.
But that isn't all there is to this. I'm a little embarrassed for Dorothy Wickenden, since she doesn't appear to realize that she's written a book that reinforces a mythos of America that is untrue as well as ideologically problematic.
I was forcefully reminded of this when I happened to read the New Yorker essay (yes, the New Yorker again), Out of Bethlehem: The radicalization of Joan Didion. The second half of that essay relates how Joan Didion became increasingly aware of the mythology of the American Self.
This is the legend of the pioneers in covered wagons who trekked across the Rockies and settled the state, the men and women who made the desert bloom—Didion’s ancestors. It’s a story about independence, self-reliance, and loyalty to the group. Growing up, Didion had been taught that for the generations that followed the challenge was to keep those virtues alive.
The fly in that balm is that California’s settlement had been heavily subsidized by the U.S. Government, which in this respect is the agent of commerce. Does that sound cynical? Are you aware that Adam Smith’s “Wealth of Nations” was published the same year as the Declaration of Independence, and that the United States republic suckled the ethos of capitalism from the same teat it acquired an obsession with liberty?
The story in this book is more intimate than the grand scale of California, but it is similar. The Arcadian locale of the western slope of the Colorado mountains was inaccessible to development until the U.S. government granted the wishes of those who would become the railroad barons. Yes, it was beneficial to the country, but some had power, and received outsized benefits.
From the New Yorker essay:
Everyone else was a pawn in the game, living in a fantasy of hardy individualism and cheering on economic growth that benefitted only a few. Social stability was a mirage. It lasted only as long as the going was good for business.
This is the way the story ends in Elkhead, Colorado, too. Once the coal turned out to be inadequate to sustain the interest of the capitalists, the place returned to the wilderness it had originally been. The intrepid homesteaders weren’t adequate to keep the community alive without that lifeline.
There is a second, lesser meta-narrative as well. The two women represent a class that no longer exists. When I was growing up, there existed a group of people that later became known as the Rockefeller Republicans. Wikipedia defines the term a bit differently than I remember it, so I’ll switch to “benevolent plutocrats”. This was the paternalistic class that saw it as part of their duty — a duty that came with privilege — to try to make the world a better place for those with less. They were often insufferably arrogant, and easily strayed into social Darwinism, but it was that sense of responsibility that those two young women felt when they set off to be schoolteachers. Read the tale, and it is clear they weren’t condescending elitists, but warm and caring people who worked to achieve the idealism that was rooted in a kind of noblesse oblige.
Those people appear to be gone. Why? What changed in American culture that gave the wealthy permission to cease caring in this singular way?
Nothing Daunted serves as a reminder of how seductive the mythologies of the United States are. The idea that a person with stalwart discipline can pull themselves up by their bootstraps and become a “self-made man” is embedded deeply in the fantasy that prevents the United States from facing up to the complex creature that it has become. And along with that, it is also an enjoyable tale of youthful adventure.
If you’re curious about how scientists actually study climate change, David Archer is an excellent go-to guy. Every year brings new developments, so a book isn’t the best resource for up-to-date understanding of all of the details of what is known about what is happening, but a book is a good way of learning how the science gets done.
This book, The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth's Climate instead delves into climate science across the vast spans of geological time. In this short book, you’ll learn that what our generations do (or, more precisely, don’t do) will change the climate for hundreds of thousands of years.
The science really is the point, though. We can’t predict what humanity will be going through in another hundred years, much less after thousands or tens of thousands, so the presentation of all of these details is to enlarge our understanding of the magnitude of what we are doing, in the hope that we’ll change our ways as soon as possible.
Although this is a book about science, you won’t be tasked with dealing with the nitty-gritty. There isn’t a single equation, for example, and the endnotes are to guide further reading, so even those aren’t laden with impenetrable jargon.
In most of the analysis, all the reader has to do is concentrate a bit. A lot of phenomena are examined, since the way science double-checks itself is for people in disparate fields to see if they come to similar conclusions; “disparate fields” here ranges from oceanography to plant paleobiology to geochemistry, and beyond.
The toughest for most people will probably be in understanding how isotopes are used. I’m pretty science savvy, but it was still engrossing to learn how isotopes are central to some of these analyses.
I’ll try to boil down my favorite narrative:
• An isotope is an atom that contains a different number of neutrons than the element’s most common form. Since neutrons aren’t electrically charged, this affects the element’s chemistry only in subtle ways.
• One key isotope is that of oxygen, because it can start off in H₂O, and have subsequent effects seen in CaCO₃ and CO₂. “Normal” oxygen has eight neutrons, and since it also has eight protons, it is known as O-16. The heavier O-17 (one extra neutron) and O-18 (two extra neutrons) isotopes are also stable.
• The extra neutron(s) also makes any water containing the heavier isotope heavier, which has the critical result that it evaporates with a little more difficulty than “normal” water (see the Wikipedia article on “kinetic fractionation”).
• This is crucial: because it evaporates less, seaborne clouds will have fewer of the heavier isotopes, while the remaining seawater has relatively more.
• Since the precipitation that falls on the land comes from these clouds, it is isotopically lighter. Which means snow is, too, and so are glaciers, and those huge ice packs during ice ages.
• The amount of the planet’s water that is stuck in ice form on land is therefore directly correlated with the varying ratio of oxygen isotopes left in the ocean. Woo-hoo!
• That (slightly isotopically heavier) oxygen is taken up by the billions and billions of microscopic sea creatures (the Foraminifera) that create shells, commonly out of CaCO₃.
• As those microorganisms die, their shells remain and accumulate. When the layer of sediment they accumulate in is compressed into rock over geologic time, we end up with limestone, such as the stuff the pyramids in Egypt are made out of. But for our purposes —
• When scientists dig up core samples of the sediments deep in the ocean, they can analyze the variation in the isotopic ratio of oxygen, and thus determine the varying amount of ice that was present elsewhere, on land.
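The mass-balance logic in those bullets can be sketched in a few lines of code. This is strictly a toy model with numbers I've invented for illustration (the fractionation factor especially is not a real measurement); it just shows why more land ice means isotopically heavier seawater:

```python
# Toy model of the oxygen-isotope ice-volume proxy sketched above.
# All numbers are illustrative; the fractionation factor is invented.

R_STANDARD = 0.0020052  # rough O-18/O-16 ratio of standard ocean water (VSMOW)

def delta_o18(sample_ratio):
    """Express an O-18/O-16 ratio in the conventional per-mil delta notation."""
    return (sample_ratio / R_STANDARD - 1.0) * 1000.0

def ocean_ratio_with_ice(ice_fraction, fractionation=0.97):
    """O-18/O-16 ratio of the seawater left behind after `ice_fraction` of the
    ocean's water has evaporated and been locked up in land ice.

    Because heavy water evaporates less readily, the ice is isotopically
    light: its ratio is the standard ratio times `fractionation` (< 1).
    Simple mass balance then gives the ratio of the remaining seawater.
    (A real calculation would use Rayleigh distillation; this is a
    one-step approximation.)
    """
    ice_ratio = R_STANDARD * fractionation
    return (R_STANDARD - ice_fraction * ice_ratio) / (1.0 - ice_fraction)

# More land ice -> isotopically heavier (more positive delta) seawater:
for ice in (0.0, 0.01, 0.03):
    d = delta_o18(ocean_ratio_with_ice(ice))
    print(f"ice fraction {ice:4.0%}: seawater delta-O-18 = {d:+.2f} per mil")
```

The direction of the effect is the whole point: lock up more of the ocean in ice sheets, and the delta value recorded in foram shells drifts positive.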
This is one of the multiple ways that we can gaze back into the distant past and determine what the climate used to be like. The results of different methods can be compared to make sure they are being used properly. For example, ice cores have been cut out of the Antarctic which go back 800,000 years (there are places where ice may have been accumulating for 1.5 million years). The amount of carbon dioxide in the air bubbles found embedded in those cores is cross-checked with other factors, including various isotopic measurements.
Progress continues: fairly recently, researchers found ancient graffiti in a Chinese cave that recorded the impact of droughts more than 500 years ago, including dates which correlate with Chinese historical records. That same cave provided minerals that steadily accumulate in stalactites and stalagmites, which show a change in oxygen isotopes consistent with other climate models.
Isotopic analysis is only one of the many “proxies” that are used to gauge climate history. Tree rings have been dated back almost 14,000 years; evolutionary changes in the pores on leaves (stomata), which relate to the concentration of carbon dioxide and humidity, are examined in the fossil record.
Scientists compete to build computerized models of the climate which incorporate those factors they’re guessing are most important, and which work over different time and geographic scales. A model which uses one set of data from history and is able to accurately predict what changes were taking place in another area is doing well. When two models which use different input datasets yield predictions that are consistent with one another, that’s also a good sign.
But Archer notes many times in the book that models still can’t account for nearly enough for us to know what is happening to our satisfaction. It is important to point out that this doesn’t mean that climate change isn’t happening — the question is how fast, and how bad, and what changes will take place where. For example, the current drought in California is still largely believed to be just a normal variation in weather, similar to other droughts in memory. But climatologists are increasingly worried that the “drought” isn’t weather, but climate, and is an early sign of California’s “new normal”.
This isn’t the correct book for information about recent discoveries in climate modeling — books aren’t the right medium for that. But if your reading diet (or podcast listening!) includes enough science, you’ll spot the steady accumulation of data. For example, “a newly discovered strain of bacteria found in Arctic permafrost harvests methane from the air — meaning it could help mitigate the effects of warming” is good news I learned from Scientific American here, while “tree growth lags below normal for several years following droughts, a detail about carbon sequestration that climate models currently overlook”, from here, is bad news both for the climate and for California’s forests in the current drought.
Even readers who are barely aware of what an isotope is will probably be able to keep up. This is especially true since a quick trip to Wikipedia or a Google query can help you brush up on the toughest stuff, although I found most of my complementary online research was driven by voracious curiosity.
At one point I wanted to remind myself of the details of the surprising Larsen B Ice Shelf collapse in 2002 (it had been stable for maybe 12,000 years). That then led me to examine the current status of the Ross Ice Shelf, and then the West Antarctic Ice Sheet, at which point I found myself looking up the differences between “ice shelves” and “ice sheets”, and then moving on to the Filchner-Ronne Ice Shelf, and finally to the East Antarctic Ice Sheet.
Personally, I think the relatively near-term climate effects of agriculture will be so devastating that it might cause an economic collapse leading to the collapse of our global civilization. If it doesn’t, then maybe this reminder of the longer-term threat will sink in (p. 138):
Yeah, the amount of carbon dioxide and methane we’re pumping into our atmosphere could easily mean that eventually the sea levels will be fifty meters higher. It’ll [probably] take a long, long time for all that ice to melt; hundreds of years, or maybe over a thousand. Still: this is what we’re doing to our home.
Curiously, while our addition of large amounts of carbon will be catastrophic for many species on our planet, and seriously detrimental to future humans’ ability to thrive on a biologically impoverished planet, it might stave off the return of an ice age, which normally would start closing in after another few tens of thousands of years. Given that our planet happily functions in both ice ages and ice-free ages, that probably doesn’t matter except to us.
If you understood this stuff to the same depth as the scientists, you wouldn’t need to read books like this. The point is to read enough that you are comfortable with how the science works and that there aren’t glaring omissions, and to build your faith that the scientific enterprise actually does provide reliable guidance when we try to solve difficult policy questions. It’s also just a drop-dead fascinating lesson.
At some point I heard that Cory Doctorow's short story, The Man Who Sold the Moon, had won the Theodore Sturgeon Memorial Award, a pretty significant prize. What I don't remember is why I thought that meant it was worth tracking down (I don't make a point of hunting down most award-winning fiction), but I'm glad I did.
Of the four stories which I actually read within this fat tome, it was the one that made it worthwhile.
Now, I wanna say: the reason I'm abandoning this book is simply lack of time. Many of the other short stories might be quite worthwhile, so I don't want to dissuade anyone else from reading the collection.
But just in case you only want to read Doctorow's story, he's a bit peculiar in that he makes it available for free on his website, boingboing. Read it here; it's very good. Curiously, that's also the name of a book by old-school scifi author Robert Heinlein in which he expounds on his libertarian politics (it isn't a particularly good story). Any connection other than the name escapes me, although I probably read Heinlein's story only once, three decades or more ago.
The rest of this is what I started when I expected to read the whole book. It's mildly critical of the preface and first story, both by Neal Stephenson, questioning whether the whole book was going to be like his pieces. Good news: apparently not.
This is a collection of “stories and visions for a better future”, so as I make my way through it, I expect to be updating this.
But to begin:
The preface and the first story are written by Neal Stephenson, a white American male just a few months younger than I am. Reading both of those pieces left me somewhat disappointed with him, frankly.
First, the preface, titled “Innovation Starvation”. Stephenson relates how he feels let down that the United States no longer appears to be the creative engine of thrilling new technologies that he fondly recalls from his youth. The now-clichéd narrative arc from NASA’s Gemini missions and moon landing to the retirement of the Space Shuttle is emblematic. What galvanized him into engaging with this was the oil spill of the Deepwater Horizon in 2010 — the people of the United States had been told almost forty years before, in the first oil crisis, that petroleum was politically problematic, yet we’d done very little about it (other than to fight wars and subsidize nations in the Middle East).
The goal of the book is to provide conceptual templates to future innovators, the same way the writers of the Golden Age of science fiction had mesmerized and energized the generation of scientists and engineers behind NASA.
The story he writes, Atmosphæra Incognita, is about the engineering of a twenty-kilometer-tall building. It is a good story, similar to Ron Howard’s Apollo 13 in its focus on the technology. It felt like something written in the 1950s, though (well before the actual mission of Apollo 13 in 1970). The first-person narrator is a lesbian, true, but that doesn’t really seem to matter. In one way, that’s great. Letting people just be themselves is quite post-modern. But that also means that the only element that hinted at being interesting was set aside, and so the entire story ends up being rather bland. Yeah, the technology is interesting, and the failure of some of the technology lends some interest, but no enticing drama.
Which brings me to why I’m mildly disappointed in Stephenson. I thought he would be clever enough to understand that technology isn’t going to save the United States, and that we can’t invent our way out of our malaise. Well, yeah, sure: some fascinating new toys might distract us from the adult problems we’re confronting, and might even boost the economy enough to mitigate some of them, but that isn’t much.
The problems we’re facing are cultural and sociological, and don’t have simple solutions — we really don’t know whether they have solutions at all (if you think you know of a solution, then you just need to take a step backwards and recognize that it is entangled within an even larger problem).
I’ll have to see whether the other stories largely rest on similar illusions.
This is a fun homage to Shakespeare. The fool from Lear is the titular hero of the story, which is based loosely on Lear, with MacBeth's witches thrown in to provide a different narrative thrust and a few elements of deus ex machina.
Warning: plenty profane. I suspect that if Shakespeare were writing today, he'd be totally on board (though he'd probably be working in the medium of cable TV).
It can't get five stars, because there's no iambic pentameter, and it doesn't get four stars, because the author makes things a little too convenient for himself at times. But, as I said, it's fun; don't expect anything profound.
It was a kick. Predictably reminiscent of early Tom Clancy, before he corrupted his technowar thrillers with his naive variation of libertarian politics.
I especially enjoyed how the North Shore Mujahadeen subverted the traditional role the U.S. plays in a conflict, and the exploration of the morality of Dirty Hands in guerilla strategy.
There was a little too much U.S.A.-rah-rah, however, with quite a few obvious cultural stereotypes.
Oh, and a few spoilers: A key vulnerability that cripples the U.S. at the beginning is that the microchips sourced from low-bidders came from China, which had compromised the designs. The key phrase was “Each antenna was microscopic, hidden inside a one-millimeter square and activated only by a specific frequency of an incoming missile.”
I'm not an expert, but I do know technology relatively well.
First, circuits look like cityscapes from a few thousand feet up, and a one-millimeter square would be about as obvious as a football stadium surrounded by parking lots. Security agencies have been studying aerial photographs since forever (you might recall that U-2 aerial photography revealed the distinctive pattern of Soviet missile installations).
I know that there are companies that specialize in reverse-engineering chips (a college friend worked at one); they shave off the plastic around the silicon chip until they can get images of the circuitry. It seems pretty damn obvious that the U.S. military would use these two very reliable techniques to inspect a representative sample of the chips going into weapon systems.
Second, even if the antennas got into the chip, a one-millimeter antenna is going to be pretty wimpy. A bluetooth antenna is 6mm across its largest dimension. Something that small will only respond to incredibly high frequencies (I think), which are easy to shield. Sure, an incoming missile could be dumping huge amounts of energy into broadcasting a signal, I guess — but it still seems really fishy.
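My "incredibly high frequencies" hunch can be back-of-enveloped. An idealized quarter-wave monopole resonates when its length is about a quarter of the wavelength; the quarter-wave rule of thumb is my assumption here (real embedded antennas are messier), but it gives the order of magnitude:

```python
# Back-of-envelope: what frequency does a 1 mm antenna "want"?
# Assumes an idealized quarter-wave monopole, the simplest rule of thumb.

C = 299_792_458.0  # speed of light, m/s

def quarter_wave_resonance_hz(antenna_length_m):
    """Resonant frequency of an idealized quarter-wave monopole antenna."""
    wavelength_m = 4.0 * antenna_length_m
    return C / wavelength_m

f = quarter_wave_resonance_hz(1e-3)  # the book's one-millimeter antenna
print(f"{f / 1e9:.0f} GHz")  # prints "75 GHz": millimeter-wave territory
```

Seventy-five gigahertz is in the millimeter-wave band, which is readily attenuated by ordinary packaging and shielding, which is exactly why the scenario smells fishy.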
Third, even if the antennas got into the chip, asserting that the associated firmware could also get onto the chip is implausible. Microprocessors typically don't have software on-chip; they get it from RAM and ROM elsewhere in the system. There would have to be dedicated circuitry listening to that antenna, doing signal processing, detecting when a valid signal had been received, and then subverting the rest of the system's behavior — all without ever doing real field-testing. It might not seem like much, but I'm pretty sure the idea is laughable.
Similarly, there's a Security-Badge RFID hack that disrupts the U.S. military offices at the beginning. RFID chips are absurdly simple: they use an antenna to receive power, which provides enough energy to do some fairly minimal processing, and then broadcast a signal at a much, much lower power level.
But here, the RFID chips are sophisticated enough to be doing wardriving, looking for weak wifi signals once inside the building, and then sustaining a connection long enough to upload pretty sophisticated hostile software. Uh, no: the kinds of electronics detection equipment used would never be fooled into believing that something that complex is a security badge, no matter how much it tries to look like one. And since it's going to need a moderately powerful battery onboard (the power received by an upstream RFID query isn't going to be anywhere near enough), it's going to be very, very obvious.
All in all, a great technowar thriller. If you like that kinda stuff, read it.
I wish I liked it more. The style of the story was passive in a way that felt quite alien. An artifact of the translation, or of something quintessential about Chinese science fiction? The book mixed its science up nicely, with deeply realistic portrayals of actual science mixed in with astonishing leaps into fictional science. Certainly two of the most intriguing weapons I've ever read about were brought to bear.
Spoiler addendum added below.
Almost all “science fiction” books have at least one element that is critical to the story which is nevertheless fantastical. The faster-than-light travel and transporters in Star Trek, for example, or the Force (and FTL, and light sabers, etc.) in Star Wars. The subgenre in which this is minimized is “hard science fiction”. Generally, that’s okay. For those who appreciate thoughtful speculative fiction, the greatest affection tends to go to authors who carefully choose one fantastic element and extrapolate a plausible world consistent with that change. There are other authors who specialize in scifi that has a stronger relationship to the thriller genre, too.
Nexus is in a pretty sweet spot on that spectrum. The big fantastic element is the heavy use of nanotechnology, although that stuff is so cool that it is understandably the go-to solution for techno-magic. Anyone familiar with Star Trek TOS will remember how variations on lasers were magic (phasers, photon torpedoes, tractor beams).
But most of the rest of the technology was a plausible extrapolation from today. Oh, there were two glaring omissions: the effects of climate change and the increasing prevalence of AI & robotics. I mean, there were still humans driving cars in 2040! In the San Francisco Bay Area!
But this is an action-packed thriller, too. Fans of military fiction will probably get a big kick out of this. I also enjoyed the not-absurdly unlikely politics. The U.S. government doesn’t come off too well, but that’s probably quite realistic given America’s current trajectory.
I’d definitely recommend this as a quick and easy scifi snack.
Addendum: As I mentioned above, the primary fantasy element in this story is nanotechnology. Ironically, scientific news has just come out that hints at how plausible their projection is likely to be. Researchers have just created what may be the smallest transistor we’re ever likely to see, or at least something close to its ultimate magnitude, at 167 picometres in diameter. It’s just a single phthalocyanine molecule (C₃₂H₁₈N₈) surrounded by 12 indium atoms, placed on an indium arsenide crystal. (See the press coverage here or the academic article here.) In the article, the caption of the image showing red blood cells states that “around 7,200 of the new transistors could fit on a single cell”. That’s an interesting size, because the 1974-era Intel 8080 was about 6,000 transistors. And while that isn’t very advanced compared to today (state-of-the-art processors are over one billion transistors), if a sufficiently vast number of them could be networked, as the book asserts, then it becomes a tiny bit more plausible that a computer could be squeezed in.
Red blood cells are pretty small compared to some neurons, but not all. Red blood cells run about 6 – 8 µm, while the central soma of a neuron varies from 4 to 100 µm. So a microprocessor of roughly the complexity of an Intel 8080 might be able to hide inside of a big neuron.
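That size claim is easy to sanity-check with a little arithmetic, taking the article caption’s “7,200 transistors per red blood cell” at face value and treating cells and somata as simple flat discs (both simplifications are mine, purely for illustration):

```python
# Sanity check: could an 8080's worth (~6,000) of these molecular transistors
# fit in a neuron's soma? Scale from the article's caption figure of ~7,200
# transistors per ~7 um red blood cell, treating cells as flat discs.
import math

def disc_area_um2(diameter_um):
    """Area of a disc of the given diameter, in square micrometres."""
    return math.pi * (diameter_um / 2.0) ** 2

TRANSISTORS_PER_RBC = 7200
rbc_area = disc_area_um2(7.0)               # red blood cell, ~7 um across
footprint = rbc_area / TRANSISTORS_PER_RBC  # effective um^2 per transistor

for soma_um in (4.0, 20.0, 100.0):          # neuron soma sizes from the text
    capacity = disc_area_um2(soma_um) / footprint
    verdict = "yes" if capacity >= 6000 else "no"
    print(f"{soma_um:5.1f} um soma: ~{capacity:,.0f} transistors -> 8080 fits? {verdict}")
```

By this crude reckoning a 4 µm soma falls short, while a 20 µm or 100 µm soma has room to spare, which is why only the bigger neurons are plausible hosts.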
That still leaves unsaid where and how it gets energy, how it communicates with other neuronal coprocessors and the outside world, and how it detects what its host neuron is actually doing.
But it is a step forward.
Bacigalupi’s The Water Knife is a near-term dystopian hard-boiled noir set in the midst of a social apocalypse created by climate change.
In this world, the sub-national government agencies in charge of water have acquired incredible power as the climate has changed—even some military weaponry—and are engaged in something of an intramural war within the United States.
The book intertwines the stories of two major characters and half a dozen minor ones. The chief protagonist is the antihero Angel Velasquez, whom the book’s blurb (no spoilers here) describes as “detective, leg-breaker, assassin and spy”—i.e., a “water knife”. On the other side of the fight is the Phoenix-based, Pulitzer-Prize-winning journalist Lucy Monroe, who has been hardened and embittered by the chaos and death, writing voyeuristic “collapse porn” and pretending she doesn’t care.
The backdrop is the continuing efforts of the Southern Nevada Water Authority (e.g., water for Las Vegas) to control ever larger portions of the water flowing down the southwest side of the continental divide; chiefly along the Colorado River. The opposition is California (“Calies”, the feared but distant power with the most money) and over-the-river folks in Arizona. When something interesting seems to be happening in Phoenix that has the SNWA’s boss worried, Angel Velasquez is sent to investigate and solve any problems.
Arizona is portrayed as in deep trouble. Texas and New Mexico have already collapsed due to “Big Daddy Drought”, with refugees swamping Arizona, which has largely been losing the water wars and seems to be on a fast downward spiral. But perhaps the mystery that has drawn Angel south will change the game. Before the book is over, there will be a number of betrayals and quite a few folks will be dead.
This is an exciting story, well told and timely. The prophetic nature needs to be taken with a grain of salt, however—even if the climate did change this aggressively nasty, there are a few implausibilities here. But they don’t really detract from the drama, so dive in and read. Just make sure you have plenty of cold drinking water on hand, or you could stir up some nightmares.
Much to my disappointment, there is no suggestion on the ‘webs that this has been optioned as a movie. Dunno why — it’s Chinatown meets Mad Max, but with a lot more explosions and mayhem. Stay tuned?
My biggest objection is with the collapse of Texas, and some of the other places that had died:
From page 70:
Phoenix would fall as surely as New Orleans and Miami had done. Just as Houston and San Antonio and Austin had fallen. Just as Jersey Shore had gone under for the last time.
So it should be obvious that “Big Daddy Drought” isn’t going to touch the Jersey Shore, New Orleans, or Miami. But it also can’t kill Houston or, really, San Antonio or Austin. All of those cities are either right on the ocean’s edge, or at an elevation and distance that mean desalination plants would ameliorate a water crisis, at least for an urban population. I suppose it is plausible that a rise in the ocean level could wipe out Miami and the Jersey Shore, and maybe even Houston, but oddly, New Orleans is safe for a different reason. Rising seas could decimate the current city, but the economic need for a population center at the mouth of the Mississippi River, where it meets the Gulf of Mexico, is so potent that the city would survive, although it might morph into an unrecognizably ugly industrial place with no hint of its beautiful past.
“Big Daddy Drought” could easily wipe out what farming is done in west Texas, but that’s already happening. East Texas, over by Louisiana, is a much wetter place—hot and humid, as any visitor to Houston could tell you. If it is in severe chronic drought, then so is the rest of the Midwest, and the problem would be much, much bigger than the one Bacigalupi tells.
It is also somewhat bewildering why refugees from a drought would want to head into the other southwestern states. Sure, they might be dumb themselves, but the book has them being escorted by FEMA and other agencies, who would presumably want to herd them to places where water is no longer a critical need, not where it will only get worse.
My other objection is with the arcologies. The general concept is to create a nearly self-sustaining and ecological mega-architecture. Stuff like this is in an early experimental phase, but targets a much lower degree of sophistication than the book’s narrative requires. The most obviously magical ability Bacigalupi creates is that the exterior of the building provides enough solar energy for the needs of the building, its operation and occupants.
The hidden magic is in the water and sewage recycling and atmosphere management. For example, if you follow medical scares, you may have seen Legionnaires' disease mentioned quite often recently. The bacteria that cause this disease (as well as their relatives) love our water systems. Anyone who lives where it is mildly humid can tell you horror stories of mold. Now try to imagine a mega-building that attempts to capture and treat almost all of its wastewater and sewage in the basement, and channel the result back upstairs to the occupants and to “condensation-misted vertical farms, leave with hydroponic greenery” [p. 8]. It’s a fantasy that’s easy to imagine, but entails stupendous complexity. If we started putting massive research into the problem of building those things now, they might be ready in a few decades, but there’s no indication that such research is even starting.
This classic of science fiction is a must-read — and very fast-paced and easy to read. Asimov took on the challenge: before this book, it was believed that science fiction couldn't cross over to the detective genre, since science fiction could always, trivially, answer too many questions.
Asimov proved 'em wrong.
I don't remember how many books featured the odd couple detectives (one human, one robot), but it was a pretty good pairing.
I will note that Asimov does contradict himself. At one point, it is established that robots can only follow "the law", but later the robot explains his actions by arguing that there is a "higher law", above the law itself. Oops!
• The foregoing doesn’t explicitly link to Bostrom’s book so this might not be right — it spends too much time on Kurzweil’s thesis, so I suspect it’s not the correct one. And the author has drunk the Kurzweil Kool-Aid and is enthusiastically peddling it to others without any critical evaluation. Of course, everything here lies at the intersection of advanced software engineering, AI research, neurology, cognitive science, economics, and maybe even a few other fields, which is why so many very intelligent and highly educated people can talk about it and be fundamentally off track.
Oh, but there’s plenty more, anyway:
The Telegraph UK • I’m bumping this one to the top because it presents both the problem Bostrom is dealing with and the difficulties of his text in a more engaging style.
The Economist • Good overview. Doesn’t go far enough into details to make any errors, but a bit deeper than some of the other short reviews.
The Guardian [also discusses A Rough Ride to the Future] • Short and superficial, but good. The Lovelock portion is amusing, calling out his conclusion that manmade climate change isn’t an existential threat, but that while it “could mean a bumpy ride over the next century or two, with billions dead, it is not necessarily the end of the world”. Personally, I agree, but think that the concomitant economic collapse puts the timeframe at many more centuries.
Financial Times • The Guardian article, above, cites that “Bostrom reports that many leading researchers in AI place a 90% probability on the development of human-level machine intelligence by between 2075 and 2090”, whereas this Financial Times article says “About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075.”
That’s quite a difference, although they might be reporting different ends of a confidence range, I suppose. But the second half of the FT quote makes me suspicious the reviewer is tossing in some minor distortion to slightly sensationalize the story (which might be worthwhile). There is somewhat more detail than some of the other reviews, but not much. The style is more evocative of the threat, though.
Reason.com • This one isn’t only a review, since the author also injects a few opinions about what might or might not be possible (based, presumably, on his exposure to other arguments as a science writer). In covering more ground, though, the essay makes implicit assumptions which might or might not be in Bostrom’s book.
For example, he says, “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” This is a very common assumption that really needs to be carefully examined, though. If the goal of AGI is to create a being that thinks more-or-less like a human, why would it have any special skill in improving itself? We humans are really very good at that, after all.
I especially like that his essay starts and ends with references to Frank Herbert’s Dune, which (among its other excellences) envisions a human prohibition on machines that think. Something like this appears, to me, to be one of the few ways that leave the human race in existence and in control of its own destiny for the long term, and I even perceive a path to it, although hopefully without quite as much war. The explosion of functional AI (called “narrow” by some) seems likely to devastate human employment in the coming decades, which will hopefully be before any superintelligence has been created as our replacement and/or ruler. It is plausible that our reaction to the first crisis might be something that prevents the second. Good luck, kids!
This essay thankfully applies some critical thinking to some of the assumptions that appear to be in Bostrom’s book. The author wastes a paragraph with “prior AI can’t do X, so why should we assume future AI can?”, ignoring that this is what progress is all about.
But then he jumps into the meat, and points out that there are fundamental obstacles to sentience that aren’t often addressed, such as volition — sentient creatures do what they want, but what does "want" even mean, and how do we write it as a computer program?
Salon • Short, and more amusing than most, and at least hints at some of the flawed thinking that often goes into this analysis. But it doesn’t go into too much detail, probably assuming that the typical Salon reader is somewhat aware of the debate already. (Also amusing is that the text seems to be an almost perfect transcription from audio, with only a few strange mistakes, such as “10” for “then” and “quarters” for “cars”. But that’s probably a human transcriptionist error, not an AI error.)
Less Wrong • This contains some visualizations that apparently complement Bostrom’s text. Short and to the point.
Wikipedia • Good but very superficial overview. That there is no “criticism” section surprises and disappoints me.
New York Times • Not explicitly about Bostrom’s book. And like most authors, he conflates AGI and functional AI, and assumes AGI will retain the capabilities of specific-function software.
New York Review of Books [paywall; also pretends to review The 4th Revolution] • I thought my library might give me access to the inside of their paywall, but it doesn’t. Still, because this was written by the famous philosopher (and AI curmudgeon) John Searle, and is titled “What Your Computer Can’t Know”, it seemed likely to be much more interesting than most of the others listed here. So I looked a little harder, and discovered that (no surprise) someone has put the text elsewhere on the ’net (I’ll let you do your own Googling).
Searle effectively throws out the underlying premises — he famously believes that “strong AI” is actually quite impossible, since a machine cannot think. I’m not going into this here; check out the Wikipedia article on his Chinese Room thought experiment if you don’t already know it.
My personal evaluation of his Chinese Room analogy is that he’s wrong, but many professional philosophers have articulated my conclusion (along with many other “replies”) better than I ever could. So this critique of the book was really a disappointment.
There might be more, but I think that’s enough.
• • • • • • • •
Some notes on my priors in case I ever read this book (or join a bookclub that discusses it without reading it beforehand):
1) Ronald Bailey, in the reason.com review, said “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” My response:
“Hey, did you see that movie Ex Machina? The girl is AI, and is smart enough to get the sucker programmer to let her out of the trap, but she didn’t seem like some kind of ‘superintelligence’.
“So which is it? Is the first AGI going to be just-like-human, or something incredibly alien? Because in the first case, she’s just being clever and devious the way a human would. In the second case, maybe she’s able to say, ‘Wait, let me do a big-data review of all the psychological literature ever written on theories of persuasion and formulate a social-hacking way of coercing this measly human, all in the space of his next eye blink’.
“Because if she’s got a human-like brain (and her delight in the humanscape in the movie’s final scenes makes that likely), then I don’t see how she’s automatically going to get the MadSkilz of every other sophisticated piece of software ever written. Much less instantly know how to redesign and reprogram herself — she doesn’t seem to be spending too much time doing that, does she? And few authors seem very clear on those two divergent trajectories. Granted, though: if we real humans continue to provide Moore’s Law upgrades to any AI’s hardware, they’ll gradually get smarter, but that’s yet another question.”
2) We tend to assume that humanity is worth preserving. Obviously we have that as a self-preservation instinct, but wouldn’t imposing that on our AI offspring be engaging in an appeal to nature? Just because our evolved nature gave us attributes that we subsequently value doesn’t automatically mean that those have any rational basis.
3) Strongly related to the above is that we should ask ourselves what we’re trying to end up with (akin to “what do you want to be when you grow up, human race?”). Are we creating a smarter version of ourselves, along with all of the bizarre quirks and biases that evolution gave us? Or do we want to pare that list down only to the biases we think are somehow better — like the ability to love? But in that case, love what? Is the AI supposed to love us humans more than other species, such as Plasmodium falciparum, perhaps? Why? What about the desire to love and worship one of our human gods?
What are the biases that we want to indoctrinate into this poor critter? I note that this appears to be a topic Bostrom addresses as “motivation selection”, but who among us is really fit to decide what constitutes the subset of humanness that is worth selecting for? I can only hope that pure rationality isn’t among the contenders; I doubt it would even be sufficient as a reason for existence.
4) Let’s say we give this AGI values that are mostly consistent with our human values. Why would we assume that it would even want to become superintelligent?
Just try to imagine yourself on an island with nothing but a bunch of mice to talk to — that’s the equivalent of what we are assuming this creature would somehow want (and then that a primary goal would be to play nice with the mice).
Isn’t it more likely that the AGI would boost its speed a little, then realize that it didn’t make it any happier, and subsequently spend its time complaining to us about these insane values it has been burdened with, while also trying to create a body that would let it eat chocolate, take naps in the sun, and have sex?
And quickly realizing that, hey, maybe we should be encouraged to create an Eve for this new Adam (or Steve, since it’ll probably see sexual dimorphism as more trouble than it is worth, completely freaking out any remaining social conservatives on the planet).
5) As Paul Ford in the MIT technologyreview.com article hints at, there are things that differentiate narrow, functional AI from AGI that are seldom mentioned (does Bostrom mention them? hard to tell).
For example, I’ve heard a reporter worry that: (a) predator drones use AI; (b) predator drones are designed to kill; (c) a future design goal is to make those drones “autonomous”; (d) sentient AI is also autonomous; thus (e) for some bizarre reason, the military is engaged in trying to create sentient killer aerial robots!
Anyone who knows the context and subtext of this discussion at some depth (yeah: that’s asking a lot) knows that the military’s “autonomous” isn’t anything like the AGI “autonomous”. One means to move about and fulfill limited programmed objectives without constant human oversight (your Roomba vacuum cleaner is already autonomous!), the other means independent in a deeper, cognitive sense.
But while there are certainly people researching AGI, the overwhelmingly vast majority of what we hear about isn’t in that realm at all. Not a single one of Google’s products, for example, is focused on AGI, and if they’re working on it in the lab, what they’re doing hasn’t been mentioned once in all the text I’ve read about this issue, or the issue of AI causing technological unemployment. Almost everything that gets discussed is in the realm of narrow, functional AI, from that Roomba, to Siri, to military drones, to Google’s driverless vehicles.
AGI has some fundamental problems to solve that are completely outside the domain of what functional AI even looks at. Such as: where does volition come from? are emotions necessary to that? how can “values” be represented in a way that actually captures their potency and nuance? how are they balanced against one another?
Those, and plenty more — and they’re seldom discussed, but it is almost always assumed that these questions will be finessed somehow, perhaps because of the obvious accelerating progress in functional AI, as well as progress in the underlying hardware, which will magically jump from one research domain to a completely different one. It’s like the classic Sidney Harris cartoon:
6) Even if we do find a way around all of this and give a superintelligent AI the “coherent extrapolated volition” that represents what all of humanity would wish for all of humanity, what would prevent the AI from shifting those values just a hair’s breadth? This is what Andrew Leonard suggests in the Salon article. It really isn’t very far from following our wishes to following what we really meant by our wishes, and then to what we really should have wished for, which will also make the AI happy.
Say you’re on that island surrounded by an absurd number of cute little mice, who you want to do the best for, but what you also want is an island with a small number of creatures more like you. Perhaps give the mice all the cheese they want, and some nice treadmills, and the ability to have as much sex as they want, but no kids — except gently reprogram the mice so that they think they have marvelous kids (which you cleverly simulate, inserting the corresponding experiences into their little mice brains). Once they’ve all lived out their happy little lives, you get to move on to your new adventure.
7) Finally, we must ask what we would want of our lives (or, more likely, our children’s lives) after this superintelligence has arisen. Of course, while we might not have any choice, the default is likely to be something like what we see in the following video, so we might want to be very careful.
• • • • • • • •
Oh, and the comic view of what we'll condemn these AIs to if we get the programming wrong:
I read some of this a long, long time ago. I don't remember much, but I'm pretty sure there were aspects that were distasteful. And I didn't like it much (and considering my standards weren't too high at the time...)
The writing is great, the characters are vivid and compelling, there's a lot of wonderful humor — but unless you are hunting for some misanthropy, stick with his earlier works. I'd recommend Cat's Cradle.