At the superficial level, this is a very enjoyable story of "Two Society Girls in the West" — specifically, two restless twenty-something women bored with the idea of the future that is expected of them, and drifting through mild adventures (and flirting with dreaded spinsterhood) until this quite astonishing opportunity arises: be schoolteachers (sans any training) at the frontier deep in the Rocky Mountains.
It isn't really the frontier — this was more than twenty years after 1893, when the U.S. Census Bureau declared that the frontier had been closed. But this was a community far enough off the beaten path that few services were available, and so it feels pretty close to the era of Laura Ingalls, even though the nearest train depot, and its connections to the rest of the world, were less than a day away.
The author is the Executive Editor of the New Yorker, and writes wonderfully. True, she writes in the labyrinthine style of the New Yorker's long-form journalism, with its seemingly endless recursive digressions. If you really want a linear narrative, with the destination always in sight, then this book (and the New Yorker) probably isn't for you. If you think side trips into subsidiary topics are fine, as long as they are entertaining and at least tangentially relevant to the story, then you'll enjoy the ride.
Since our heroines are thrown into the job of teaching, folks in that profession will get an extra kick out of this, sympathizing and identifying with their crises and thrills.
But that isn't all there is to this. I'm a little embarrassed for Dorothy Wickenden, since she doesn't appear to realize that she's written a book that reinforces a mythos of America that is untrue as well as ideologically problematic.
I was forcefully reminded of this when I happened to read the New Yorker essay (yes, the New Yorker again) “Out of Bethlehem: The Radicalization of Joan Didion”. The second half of that essay relates how Joan Didion became increasingly aware of the mythology of the American Self.
This is the legend of the pioneers in covered wagons who trekked across the Rockies and settled the state, the men and women who made the desert bloom—Didion’s ancestors. It’s a story about independence, self-reliance, and loyalty to the group. Growing up, Didion had been taught that for the generations that followed the challenge was to keep those virtues alive.
The fly in that ointment is that California’s settlement had been heavily subsidized by the U.S. Government, which in this respect is the agent of commerce. Does that sound cynical? Are you aware that Adam Smith’s “Wealth of Nations” was published the same year as the Declaration of Independence, and that the United States republic suckled the ethos of capitalism from the same teat from which it acquired its obsession with liberty?
The story in this book is more intimate than the grand scale of California, but it is similar. The Arcadian locale of the western slope of the Colorado mountains was inaccessible to development until the U.S. government granted the wishes of those who would become the railroad barons. Yes, it was beneficial to the country, but some had power, and received outsized benefits.
From the New Yorker essay:
Everyone else was a pawn in the game, living in a fantasy of hardy individualism and cheering on economic growth that benefitted only a few. Social stability was a mirage. It lasted only as long as the going was good for business.
This is the way the story ends in Elkhead, Colorado, too. Once the coal turned out to be inadequate to sustain the interest of the capitalists, the place returned to the wilderness it had originally been. The intrepid homesteaders weren’t adequate to keep the community alive without that lifeline.
There is a second, lesser meta-narrative as well. The two women represent a class that no longer exists. When I was growing up, there existed a group of people who later became known as the Rockefeller Republicans. Wikipedia defines the term a bit differently than I remember it, so I’ll switch to “benevolent plutocrats”. This was the paternalistic class that saw it as part of their duty — a duty that came with privilege — to try to make the world a better place for those with less. They were often insufferably arrogant, and easily strayed into social Darwinism, but it was that sense of responsibility that those two young women felt when they set off to be schoolteachers. Read the tale, and it is clear they weren’t condescending elitists, but warm and caring people who worked to live up to an idealism rooted in a kind of noblesse oblige.
Those people appear to be gone. Why? What changed in American culture that gave the wealthy permission to cease caring in this singular way?
Nothing Daunted serves as a reminder of how seductive the mythologies of the United States are. The idea that a person with stalwart discipline can pull themselves up by their bootstraps and become a “self-made man” is embedded deeply in the fantasy that prevents the United States from facing up to the complex creature that it has become. And along with all that, it is also an enjoyable tale of youthful adventure.
This is a fun homage to Shakespeare. The fool from Lear is the titular hero of the story, which is based loosely on Lear, with Macbeth's witches thrown in to provide a different narrative thrust and a few elements of deus ex machina.
Warning: plenty of profanity. I suspect that if Shakespeare were writing today he'd be totally on board — though he'd probably be working in the medium of cable TV.
It can't get five stars, because there's no iambic pentameter, and it doesn't get four stars, because the author makes things a little too convenient for himself at times — but, as I said, it's fun; don't expect anything profound.
I wish I liked it more. The style of the story was passive in a way that felt quite alien. An artifact of the translation, or of something quintessential about Chinese science fiction? The book mixed its science nicely, with deeply realistic portrayals of actual science alongside astonishing leaps into fictional science. Certainly two of the most intriguing weapons I've ever read about were brought to bear.
Spoiler addendum added below.
Almost all “science fiction” books have at least one element that is critical to the story which is nevertheless fantastical. The faster-than-light travel and transporters in Star Trek, for example, or the Force (and FTL, and light sabers, etc.) in Star Wars. The subgenre in which this is minimized is “hard science fiction”. Generally, that’s okay. For those who appreciate thoughtful speculative fiction, the greatest affection tends to go to authors who carefully choose one fantastic element and extrapolate a plausible world consistent with that change. There are other authors who specialize in scifi that has a stronger relationship to the thriller genre, too.
Nexus is in a pretty sweet spot on that spectrum. The big fantastic element is the heavy use of nanotechnology, although that stuff is so cool that it is understandably the go-to solution for techno-magic. Anyone familiar with Star Trek TOS will remember how variations on lasers were magic (phasers, photon torpedoes, tractor beams).
But most of the rest of the technology was a plausible extrapolation from today. Oh, there were two glaring omissions: the effects of climate change and the increasing prevalence of AI & robotics. I mean, there were still humans driving cars in 2040! In the San Francisco Bay Area!
But this is an action-packed thriller, too. Fans of military fiction will probably get a big kick out of this. I also enjoyed the not-absurdly unlikely politics. The U.S. government doesn’t come off too well, but that’s probably quite realistic given America’s current trajectory.
I’d definitely recommend this as a quick and easy scifi snack.
Addendum (spoilers follow): As I mentioned above, the primary fantasy element in this story is nanotechnology. Ironically, scientific news has just come out that hints at how plausible their projection is likely to be. Researchers have just created what may be the smallest transistor we’re ever likely to see, or something close to it, at 167 picometres in diameter. It’s just a single phthalocyanine molecule (C₃₂H₁₈N₈) surrounded by 12 indium atoms, placed on an indium arsenide crystal. (See the press coverage here or the academic article here.) In the article, the caption of the image showing red blood cells states that “around 7,200 of the new transistors could fit on a single cell”. That’s an interesting size, because the 1974-era Intel 8080 was about 6,000 transistors. And while that isn’t very advanced compared to today (state-of-the-art processors are over one billion transistors), if a sufficiently vast number of them could be networked, as the book asserts, then it becomes a tiny bit more plausible that a computer could be squeezed in.
Red blood cells are pretty small compared to some neurons, but not all. Red blood cells run about 6 – 8 µm, while the central soma of a neuron varies from 4 to 100 µm. So a microprocessor of roughly the complexity of an Intel 8080 might be able to hide inside of a big neuron.
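The size comparison above can be checked with some back-of-the-envelope arithmetic. The figures come from the numbers quoted above; the assumption that transistor count scales with cross-sectional area (diameter squared) is my own simplification:

```python
# Back-of-the-envelope check: could an Intel-8080-class processor,
# built from these single-molecule transistors, fit in a large neuron?

TRANSISTORS_PER_RED_CELL = 7_200   # from the article's image caption
INTEL_8080_TRANSISTORS = 6_000     # approximate 1974 transistor count

RED_CELL_DIAMETER_UM = 7.0         # red blood cells run about 6-8 um
LARGE_SOMA_DIAMETER_UM = 100.0     # a large neuron soma, upper end of 4-100 um

# Assume transistor count scales with cross-sectional area, i.e. with
# the square of the diameter.
area_ratio = (LARGE_SOMA_DIAMETER_UM / RED_CELL_DIAMETER_UM) ** 2
transistors_in_large_soma = TRANSISTORS_PER_RED_CELL * area_ratio

print(f"Area ratio (large soma vs. red cell): {area_ratio:.0f}x")
print(f"Transistors fitting in a large soma: {transistors_in_large_soma:,.0f}")
print(f"Enough for an Intel 8080? {transistors_in_large_soma >= INTEL_8080_TRANSISTORS}")
```

Under those assumptions a large soma offers roughly 200 times the footprint of a red blood cell — around 1.5 million transistors' worth — so the 8080-in-a-neuron idea clears the bar with plenty of margin, though only for the very largest neurons.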
That still leaves unsaid where and how it gets energy, how it communicates with other neuronal coprocessors and the outside world, and how it detects what its host neuron is actually doing.
But it is a step forward.
This classic of science fiction is a must read — and very fast-paced and easy to read. Asimov took on the challenge: before this book, it was believed that science fiction couldn't cross over to the detective genre, since science fiction could always, trivially, answer too many questions.
Asimov proved 'em wrong.
I don't remember how many books featured the odd couple detectives (one human, one robot), but it was a pretty good pairing.
I will note that Asimov does contradict himself. At one point, it is established that robots can only follow "the law", but later the robot explains his actions by arguing that there is a "higher law", above the law itself. Oops!
The writing is great, the characters are vivid and compelling, there's a lot of wonderful humor — but unless you are hunting for some misanthropy, stick with his earlier works. I'd recommend Cat's Cradle.
In the coming years, your job is very likely to evaporate. That might mean now, or it might mean in twenty-five or thirty years. But unless you’re extraordinarily unusual, it’ll happen.
I’m going to start by giving a few examples.
Take the profession of accountancy. I’m oversimplifying, but pretty much what an accountant does is match an entity’s financial information to the appropriate laws and rules, then provide analysis of how well those match up, and maybe fill out some forms. Guess what? There’s nothing in there that a software program couldn’t do. In fact, many people who don’t make a lot of money already use such software to file their taxes, and every year that software gets a little more sophisticated; plenty of techie folks already use software that leaves the accountants doing less and less, year by year. The profession of accountant will likely be almost completely extinct within a decade (long before we see those autonomous cars everyone keeps talking about).
Let’s look at something much tougher, like a barber or hair stylist. The job there is to examine the client’s features, ask questions about what that client wants, suggest a style that is both feasible and desirable, and then cut hair to that style. Right now, that is about as far from what a computer could do as any profession in existence.
Well, first, speedy dexterity isn’t something that robots are too good at, except when they can be programmed to do precisely the same thing, over and over again, in which case they do much better than mere humans. And comprehension of a complex visual scene is another really tough computational problem. But if you’ve been following the pace of progress, you know that it is only a matter of time before the robots get there.
There’s a video floating around, which folks are getting a kick out of, showing robots failing miserably during a DARPA challenge (set to silly music, so we can feel superior!). Recall, however, how very recently the idea of a robot walking around on two feet would have been absurd. Now we laugh because they sometimes fall down while trying to open doors or climb stairs or get into cars. Given the many millions going into research, how long do you think that will last?
A vast database could already be built of head shapes, facial and hair features, just by looking at the treasure trove of images already accessible via the world wide web. AI that learns which of those are considered comical and which attractive would still be a challenge, but is probably an easier task than programming Watson was for IBM. Programming a hair-cutting robot with the knowledge of what set of snips will create the desired look would be even easier, since it could be endlessly simulated purely in virtual space.
Yeah, it will take years before we see this happen, but that just means it will be at the tail end of the tsunami instead of at the beginning, where the accountants are already feeling vulnerable. (This makes me wonder: how many out-of-work accountants will be able to get jobs as hairdressers?)
There are some jobs that, as far as we can tell, are completely out of range of the robots and their AI software, but that number will get smaller and smaller over the decades, as engineers learn to make the software more sophisticated and the hardware it runs on continues to get faster.
The real sweet spot for humans is to be truly creative. That doesn’t mean anyone in a “creative field” gets a pass, however. AI is already composing quotidian music and doing the rote job of journalists. Being really creative means knowing when and how to break the rules in a way that is fundamentally unexpected. A computer never would have created John Cage’s 4’33”, for example.
The work of Thomas Kuhn, whose The Structure of Scientific Revolutions made the word “paradigm” the cliché it is today, illustrates this. Most science, like most creativity, exists within a paradigm that people in the field understand. Most “normal science”, like most normal creativity, doesn’t bust out of that paradigm. Highly sophisticated software can be taught that paradigm, and how to explore its domain, and how to evaluate whether the results of those explorations are consistent with other highly-regarded results.
How this revolution is progressing is what Rise of the Robots: Technology and the Threat of a Jobless Future is all about.
Now, you might be skeptical. This does sound, after all, like the Luddite Fallacy, doesn’t it? If you don’t know the term, it refers to the time at the beginning of the industrial revolution when crafts folk that used hand looms to weave cloth tried to keep the innovation machine looms from making them redundant. The “fallacy” part is because there have always been compensatory effects — some people lose their careers, but the gains in technological capacity and productivity make other forms of production possible, employing even more people.
So why is this time so different? Because what the machines are replacing is different.
The simple machines replaced work that was dirty and dangerous. In the past century, more sophisticated machines replaced work that was dull — those robots that bolt together auto bodies, for example, replaced large numbers of men who used to get pretty good wages for doing an unremittingly boring job.
But today, machines are replacing our minds, not our muscles. More importantly, it is very unlikely that some vast new field of economic activity will suddenly appear on the horizon that will employ all of the workers made redundant — once machines are stronger and faster, more accurate and precise, more patient and (at least) as smart, what kind of job would that be?
If you need more convincing, here’s an analogy. Once upon a time, humans used animals to do our brute labor. It actually took thousands of years for us to arrange that, of course. Before we’d invented the wheel, animals could carry stuff on their backs. Reliable wheels were actually quite a stunning leap forward! Eventually, animals could do most of our hardest labor, except where our brains made us more adaptive to change or subtle details.
But think about what happened when we invented the steam engine. The first practical steam engine came along (as did a stunning number of other developments) right near the end of the eighteenth century (which is related to why those Luddites were rioting a few decades later). Even though it took millennia for us to learn to use animals, in most ways we’d retired them within a century. The key point is that even though those animal muscles could have still been used, there were effectively no jobs for which they were actually better than machines.
That’s where our brains are about now.
Now, there are still people that don’t believe this is going to happen. For example, in the essay How Technology Is Destroying Jobs, a professor of engineering at MIT states:
❝For that reason, Leonard says, it is easier to see how robots could work with humans than on their own in many applications. “People and robots working together can happen much more quickly than robots simply replacing humans,” he says. “That’s not going to happen in my lifetime at a massive scale. The semiautonomous taxi will still have a driver.”❞
Really? By all indications, autonomous vehicles are already safer than human drivers. Although there are still tricky situations where they could make disastrous choices, they’d still probably have a better overall safety record than us, and they’ll be getting better — we won’t, except with their help. So why would that taxi company want to pay to have a more-fallible human sitting there, bored, to second-guess the computer? It is true that people and robots working together can sometimes do better, but in far too many cases that will be a fairly short interim period, until the software engineers understand what humans are contributing and replace those final aspects — economics will create huge incentives to get the human out of the picture.
So what can a worker do? First, “step up”. Head for higher intellectual ground.
What’s the flaw here? Well, the top of the pyramid would be a great place, but there simply isn’t much room there. The example given is that, instead of using a biochemist to do a preliminary evaluation on a candidate drug, let the computers do it, and have the biochemist “pick up at the point where the math leaves off”. The difficulty is there is already a researcher doing that, and the computers are replacing the dozens of lower-tier chemists that are doing the simpler work. It’s like telling a sous-chef to “step up” and become the restaurant’s chef de cuisine! That might work for a very small number of very talented sous-chefs, but it won’t work on any large scale at all.
Second, “step aside”. Use skills that can’t be codified.
One example used here is even more absurd than the biochemist example: “Apple’s revered designer Jonathan Ive can’t download his taste to a computer.” Obviously, we can’t all be Jony Ive. But what about that accountant who was mentioned at the beginning? Can’t they learn to use personality skills to be better at interacting with clients? Sure — but won’t all the accountants want that gig? And while being the “human face” of the software might be a safe job for quite some time, it does reflect a de-skilling from the original job. This is also the category for those truly creative types who can consistently deliver outside-the-box thinking that the programmers can’t predict, and that can’t be found in correlations within huge datasets.
Third, “step in”. Be the person that double-checks the software for mistakes.
An example given here involved mortgage-application evaluation software that rejected former Federal Reserve chief Ben Bernanke’s mortgage application because it couldn’t properly evaluate his career prospects on the lecture circuit. This will be a pretty sweet job category, but not because the software will continue to make “mistakes”. It’ll be because the software is taught to recognize unusual situations and automatically funnel them to human assistants. Like the human co-pilot of a semiautonomous taxicab, there will be a lot of financial incentives to make this a very rare job, though.
Fourth, “step narrowly”. Find a sub-sub-sub-speciality that isn’t economical to automate.
The example in the article shows clearly how narrow these opportunities are: imagine being the person who specializes in matching the sellers and buyers of Dunkin’ Donuts franchises! Yeah, all the real estate agents who hate Zillow.com would love to be that guy, or his equivalent. I like my example better: you know all those Craigslist advertisements for “Two Men and a Van” to help you move furniture? The new version of those is going to be the two workers with the robotic stair-climbing mule. They’ll help city dwellers move from apartment to apartment, with one worker upstairs loading the donkey and another downstairs offloading it. It certainly will take a long time for the robotic economy to replace every little niche.
Finally, the fifth strategy is “step forward”. Write the software that puts your friends and neighbors out of work!
Writing this AI will probably be quite the growth industry for years to come. Unfortunately, it’s a pretty specialized type of programming. And even more unfortunately, there are plenty of programmers in other specialties whose jobs are starting to disappear. For example, setting up a website for a company used to be quite a labor-intensive and remunerative gig, but now there are plenty of automated suites that do the lion's share of that, leaving only a job for the rarer “stepped-up” or “stepped-in” person to finish the job. There’s going to be plenty of competition in the software field, too, as the simpler jobs are automated away.
What you’ve undoubtedly spotted in those five categories is obvious: while there will still be jobs in existence — and even some new ones — the numbers just won’t add up. When tens or hundreds of thousands of people in a field find their jobs being de-skilled or simply eliminated, the competition for those that remain will be nasty. (Which will drive wages down, ironically.)
There’s a lot more in Ford’s book. I really recommend it.
One thing I want to point out that he got mostly wrong, though, is his portion on Artificial General Intelligence, or AGI. It is common for non-specialists to engage in inappropriate metaphorical thinking when talking about AI and robots. The overwhelmingly vast majority of AI and robots that we’re seeing, or will see for a long time, is functional AI — it was designed to fulfill a specific productive function. That is radically and fundamentally different than the research going into AGI, which has the goal of creating software that is as flexible and cognitively complex as the human mind — generalized intelligence.
Just because they’re both computer programs doesn’t mean that they have much in common. Both IBM’s Jeopardy-winning Watson and Google’s autonomous driving software are software programs that run on computers, but if you asked Watson to drive your car, or quizzed one of Google's cars with a Jeopardy question, you’d get no satisfaction. That might seem obvious, but far too often the end-product of AGI is magically given all the skills of any software program ever written. Ford, for example, says on page 232, “A thinking machine would, of course, continue to enjoy all the advantages that computers currently have, including the ability to calculate and access information at speeds that would be incomprehensible for us.” You really should pretty much ignore chapter 9.
Chapter 10, on the other hand, is crucial. The coming century is going to be bad enough with all that Climate Change brouhaha, without the world trying to figure out how an economy works without many or most people having jobs. Science fiction authors have been forecasting dystopian futures for a long time (the one lying behind the story in Peter Watts’ Rifters trilogy is especially harrowing), and we’re really going to want to avoid that. You’ll quickly note that raising the minimum wage doesn’t help — in fact, it creates incentives to automate that much more quickly. Plans that provide a guaranteed minimum income make more sense, although anyone familiar with the political climate in the United States won’t give that much chance of happening.
Frankly, I’ve been telling anyone I care about who has kids to make sure they’ve got the know-how and land to garden, but I’m pretty sure I’m considered an alarmist.
Almost the perfect piece of fluff.
I think it is somewhat curious that vampires don't seem to be à la mode as they once were. Werewolves were ascendant when this book was written. We can see the same cycles in other areas of fashion — in the nineties and aughts, the androgynous look was very in. Remember when the coolest guys were the metrosexuals? Now, all those guys seem to have beards and are wearing flannel shirts. I dearly hope we don't head into chupacabra territory next.
Amusingly, the back of this ebook edition has questions intended to help a bookclub have a thoughtful discussion after reading this. I can see how some of them might provoke an interesting discussion, but the only one that is actually provocative would be the one about the vampires engaging in euthanasia.
I'd twist the question around a little bit. Imagine that there were, indeed, vampires among us, and that they need to consume human blood to live. First, would you be willing to donate blood to feed them? What if it had to be "fresh" — i.e., not refrigerated from the bloodbank?
If an actual bite was a physically ecstatic experience for the donor, would that increase your interest in being a direct donor? As in someone actually sinking fangs into your neck, knowing that you'll heal instantly and have no chance of acquiring any disease?
Okay, what about if you were terminally ill, and this appeared to be the most peaceful means of dying?
Would you vote to allow it as a form of capital punishment?
I was just using the math portion to drill myself on high school math. Having once worked in the test-prep biz, I understand that it can be difficult to formulate questions precisely the way the test designers do. But there were a handful of questions in here with ambiguous prompts — and writing unambiguous prompts is the test designer's primary job.
This is a very simple book — well, a graphic novel, except it's biographical, so it isn't a novel.
Anyway, if you have relatively ancient people in your life — or if you are one of those relatively ancient folks, or even if you're just curious — this is likely to be one of the least unpleasant ways of introducing certain topics.
I'm very happy the bookclub's dictatress selected this, the first novel in the author's "Prostitution Trilogy", because the third, The Royal Family, is 800 pages (divided into 593 chapters).