At the superficial level, this is a very enjoyable story of "Two Society Girls in the West" — specifically, two restless twenty-something women bored with the idea of the future that is expected of them, and drifting through mild adventures (and flirting with dreaded spinsterhood) until this quite astonishing opportunity arises: be schoolteachers (sans any training) at the frontier deep in the Rocky Mountains.
It isn't really the frontier — this was more than twenty years after 1893, when the U.S. Census Bureau declared that the frontier had been closed. But this was a community far enough off the beaten path that few services were available, and so it feels pretty close to the era of Laura Ingalls, even though the nearest train depot, and its connections to the rest of the world, are less than a day away.
The author is the Executive Editor of the New Yorker, and writes wonderfully. True, she writes in the labyrinthian style of the New Yorker's long-form journalism, with its seemingly endless recursive digressions. If you really want a linear narrative, with a constant view of the destination always in sight, then this book (and the New Yorker) probably isn't for you. If you think side trips into subsidiary topics are fine, as long as they are entertaining and at least tangentially relevant to the story, then you'll enjoy the ride.
Since our heroines are thrown into the job of teaching, folks in that profession will get an extra kick out of this, sympathizing and identifying with their crises and thrills.
But that isn't all there is to this. I'm a little embarrassed for Dorothy Wickenden, since she doesn't appear to realize that she's written a book that reinforces a mythos of America that is untrue as well as ideologically problematic.
I was forcefully reminded of this when I happened to read the New Yorker essay (yes, the New Yorker again), Out of Bethlehem: The radicalization of Joan Didion. The second half of that essay relates how Joan Didion became increasingly aware of the mythology of the American Self.
This is the legend of the pioneers in covered wagons who trekked across the Rockies and settled the state, the men and women who made the desert bloom—Didion’s ancestors. It’s a story about independence, self-reliance, and loyalty to the group. Growing up, Didion had been taught that for the generations that followed the challenge was to keep those virtues alive.
The fly in that balm is that California’s settlement had been heavily subsidized by the U.S. Government, which in this respect is the agent of commerce. Does that sound cynical? Are you aware that Adam Smith’s “Wealth of Nations” was published the same year as the Declaration of Independence, and that the United States republic suckled the ethos of capitalism from the same teat it acquired an obsession with liberty?
The story in this book is more intimate than the grand scale of California, but it is similar. The Arcadian locale of the western slope of the Colorado mountains was inaccessible to development until the U.S. government granted the wishes of those who would become the railroad barons. Yes, it was beneficial to the country, but some had power, and received outsized benefits.
From the New Yorker essay:
Everyone else was a pawn in the game, living in a fantasy of hardy individualism and cheering on economic growth that benefitted only a few. Social stability was a mirage. It lasted only as long as the going was good for business.
This is the way the story ends in Elkhead, Colorado, too. Once the coal turned out to be inadequate to sustain the interest of the capitalists, the place returned to the wilderness it had originally been. The intrepid homesteaders weren’t adequate to keep the community alive without that lifeline.
There is a second, lesser meta-narrative as well. The two women represent a class that no longer exists. When I was growing up, there existed a group of people that later became known as the Rockefeller Republicans. Wikipedia defines the term a bit differently than I remember it, so I’ll switch to “benevolent plutocrats”. This was the paternalistic class that saw it as part of their duty — a duty that came with privilege — to try to make the world a better place for those with less. They were often insufferably arrogant, and easily strayed into social Darwinism, but it was that sense of responsibility that those two young women felt when they set off to be schoolteachers. Read the tale, and it is clear they weren’t condescending elitists, but warm and caring people who worked to achieve the idealism that was rooted in a kind of noblesse oblige.
Those people appear to be gone. Why? What changed in American culture that gave the wealthy permission to cease caring in this singular way?
Nothing Daunted serves as a reminder of how seductive the mythologies of the United States are. The idea that a person with stalwart discipline can pull themselves up by their bootstraps and become a “self-made man” is embedded deeply in the fantasy that prevents the United States from facing up to the complex creature that it has become. And along with that, it is also an enjoyable tale of youthful adventure.
This is a collection of “stories and visions for a better future”, so as I make my way through it, I expect to be updating this.
But to begin:
The preface and the first story are written by Neal Stephenson, a white American male just a few months younger than I am. Reading both of those pieces left me somewhat disappointed with him, frankly.
First, the preface, titled “Innovation Starvation”. Stephenson relates how he feels let down that the United States no longer appears to be the creative engine of thrilling new technologies that he fondly recalls from his youth. The now clichéd narrative arc from NASA’s Gemini missions and moon landing to the retirement of the Space Shuttle is emblematic. What galvanized him into engaging with this was the Deepwater Horizon oil spill in 2010 — the people of the United States had been told almost forty years before, in the first oil crisis, that petroleum was politically problematic, yet we’d done very little about it (other than to fight wars and subsidize nations in the Middle East).
The goal of the book is to provide conceptual templates to future innovators, the same way the writers of the Golden Age of science fiction had mesmerized and energized the generation of scientists and engineers behind NASA.
The story he writes, Atmosphæra Incognita, is about the engineering of a twenty-kilometer tall building. It is a good story, similar to Ron Howard’s Apollo 13 in its focus on the technology. It felt like something written in the 1950s, though (well before the actual mission of Apollo 13 in 1970). The first-person narrator is a lesbian, true, but that doesn’t really seem to matter. In one way, that’s great. Letting people just be themselves is quite post-modern. But that also means that the only element that hinted at being interesting was set aside, and so the entire story ends up being rather bland. Yeah, the technology is interesting, and the failure of some of the technology lends some interest, but no enticing drama.
Which brings me to why I’m mildly disappointed in Stephenson. I thought he would be clever enough to understand that technology isn’t going to save the United States, and that we can’t invent our way out of our malaise. Well, yeah, sure: some fascinating new toys might distract us from the adult problems we’re confronting, and might even boost the economy enough to mitigate some of them, but that isn’t much.
The problems we’re facing are cultural and sociological, and don’t have simple solutions — we really don’t know whether they have solutions at all (if you think you know of a solution, then you just need to take a step backwards and recognize that it is entangled within an even larger problem).
I’ll have to see whether the other stories largely rest on similar illusions.
This is a fun homage to Shakespeare. The fool from Lear is the titular hero of the story, which is based loosely on Lear, with Macbeth's witches thrown in to provide a different narrative thrust and a few elements of deus ex machina.
Warning: plenty profane. I suspect that if Shakespeare were writing today, he'd be totally on board (though he'd probably be working in the medium of cable TV).
It can't get five stars, because there's no iambic pentameter, and it doesn't get four stars, because the author makes things a little too convenient for himself at times — but, as I said, it's fun; don't expect anything profound.
I wish I liked it more. The style of the story was passive in a way that felt quite alien. An artifact of the translation, or of something quintessential about Chinese science fiction? The book mixed its science up nicely, with deeply realistic portrayals of actual science mixed in with astonishing leaps into fictional science. Certainly two of the most intriguing weapons I've ever read about were brought to bear.
Spoiler addendum added below.
Almost all “science fiction” books have at least one element that is critical to the story which is nevertheless fantastical. The faster-than-light travel and transporters in Star Trek, for example, or the Force (and FTL, and light sabers, etc.) in Star Wars. The subgenre in which this is minimized is “hard science fiction”. Generally, that’s okay. For those who appreciate thoughtful speculative fiction, the greatest affection tends to go to authors who carefully choose one fantastic element and extrapolate a plausible world consistent with that change. There are other authors who specialize in scifi that has a stronger relationship to the thriller genre, too.
Nexus is in a pretty sweet spot on that spectrum. The big fantastic element is the heavy use of nanotechnology, although that stuff is so cool that it is understandably the go-to solution for techno-magic. Anyone familiar with Star Trek TOS will remember how variations on lasers were magic (phasers, photon torpedoes, tractor beams).
But most of the rest of the technology was a plausible extrapolation from today. Oh, there were two glaring omissions: the effects of climate change and the increasing prevalence of AI & robotics. I mean, there were still humans driving cars in 2040! In the San Francisco Bay Area!
But this is an action-packed thriller, too. Fans of military fiction will probably get a big kick out of this. I also enjoyed the not-absurdly unlikely politics. The U.S. government doesn’t come off too well, but that’s probably quite realistic given America’s current trajectory.
I’d definitely recommend this as a quick and easy scifi snack.
Addendum: (view spoiler)[As I mentioned above, the primary fantasy element in this story is nanotechnology. Ironically, scientific news has just come out that hints at how plausible their projection is likely to be. Researchers have just created what may be close to the smallest transistor we’re ever likely to see, at 167 picometres in diameter. It’s just a single phthalocyanine molecule (C₃₂H₁₈N₈) surrounded by 12 indium atoms, placed on an indium arsenide crystal. (See the press coverage here or the academic article here.) In the article, the caption of the image showing red blood cells states that “around 7,200 of the new transistors could fit on a single cell”. That’s an interesting size, because the 1974-era Intel 8080 was about 6,000 transistors. And while that isn’t very advanced compared to today (state-of-the-art processors are over one billion transistors), if a sufficiently vast number of them could be networked, as the book asserts, then it becomes a tiny bit more plausible that a computer could be squeezed in.
Red blood cells are pretty small compared to some neurons, but not all. Red blood cells run about 6 – 8 µm, while the central soma of a neuron varies from 4 to 100 µm. So a microprocessor of roughly the complexity of an Intel 8080 might be able to hide inside of a big neuron.
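A quick back-of-envelope check of that size comparison (my own sketch, not from the article — the “7,200 per cell” figure and the 8080’s ~6,000-transistor count come from the sources above; the 7 µm cell diameter and the treat-footprints-as-squares simplification are my assumptions):

```python
import math

# Assumed red blood cell diameter: 7 µm (mid-range of 6-8 µm).
rbc_diameter = 7e-6                          # metres
rbc_face_area = math.pi * (rbc_diameter / 2) ** 2

# The article's caption says ~7,200 transistors fit on one cell,
# which implies an effective per-device footprint:
transistors_per_cell = 7200
footprint = rbc_face_area / transistors_per_cell
pitch = math.sqrt(footprint)                 # effective centre-to-centre spacing
print(f"implied device pitch: {pitch * 1e9:.0f} nm")

# An Intel 8080 had ~6,000 transistors; at that footprint the whole
# processor would occupy a square with side:
side = math.sqrt(6000 * footprint)
print(f"8080-equivalent square, side: {side * 1e6:.1f} um")

# A large neuron soma can reach ~100 µm across, so a processor of
# that complexity could plausibly hide inside one, as suggested above.
assert side < 100e-6
```

The implied pitch comes out to roughly 70-odd nanometres, and the 8080-equivalent square to well under 10 µm on a side — consistent with the “fits in a big neuron, not a red blood cell” reasoning.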
That still leaves unsaid where and how it gets energy, how it communicates with other neuronal coprocessors and the outside world, and how it detects what its host neuron is actually doing.
But it is a step forward. (hide spoiler)]
This classic of science fiction is a must read — and very fast-paced and easy to read. Asimov took on the challenge: before this book, it was believed that science fiction couldn't cross over to the detective genre, since science fiction could always, trivially, answer too many questions.
Asimov proved 'em wrong.
I don't remember how many books featured the odd couple detectives (one human, one robot), but it was a pretty good pairing.
I will note that Asimov does contradict himself. At one point, it is established that robots can only follow "the law", but later the robot explains his actions by arguing that there is a "higher law", above the law itself. Oops!
• The foregoing doesn’t explicitly link to Bostrom’s book so this might not be right — it spends too much time on Kurzweil’s thesis, so I suspect it’s not the correct one. And the author has drunk the Kurzweil Kool-Aid and is enthusiastically peddling it to others without any critical evaluation. Of course, everything here lies at the intersection of advanced software engineering, AI research, neurology, cognitive science, economics, and maybe even a few other fields, which is why so many very intelligent and highly educated people can talk about it and be fundamentally off track.
Oh, but there’s plenty more, anyway:
The Telegraph UK • I’m bumping this one to the top because it presents both the problem Bostrom is dealing with as well as the difficulties of his text in a more engaging style.
The Economist • Good overview. Doesn’t go far enough into details to make any errors, but a bit deeper than some of the other short reviews.
The Guardian [also discusses A Rough Ride to the Future] • Short and superficial, but good. The Lovelock portion is amusing, calling out his conclusion that manmade climate change isn’t an existential threat, but that while it “could mean a bumpy ride over the next century or two, with billions dead, it is not necessarily the end of the world”. Personally, I agree, but think that the concomitant economic collapse puts the timeframe at many more centuries.
Financial Times • The Guardian article, above, cites that “Bostrom reports that many leading researchers in AI place a 90% probability on the development of human-level machine intelligence by between 2075 and 2090”, whereas this Financial Times article says “About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075.”
That’s quite a difference, although they might be reporting different ends of a confidence range, I suppose. But the second half of the FT quote makes me suspicious the reviewer is tossing in some minor distortion to slightly sensationalize the story (which might be worthwhile). There is somewhat more detail than some of the other reviews, but not much. The style is more evocative of the threat, though.
Reason.com • This one isn’t only a review, since the author also injects a few opinions about what might or might not be possible (based, presumably, on his exposure to other arguments as a science writer). In covering more ground, though, the essay makes implicit assumptions which might or might not be in Bostrom’s book.
For example, he says, “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” This is a very common assumption that really needs to be carefully examined, though. If the goal of AGI is to create a being that thinks more-or-less like a human, why would it have any special skill in improving itself? We humans are really very good at that, after all.
I especially like that his essay starts and ends with references to Frank Herbert’s Dune, which (among its other excellences) envisions a human prohibition on machines that think. Something like this appears, to me, to be one of the few ways that leave the human race in existence and in control of its own destiny for the long term, and I even perceive a path to it, although hopefully without quite as much war. The explosion of functional AI (called “narrow” by some) seems likely to devastate human employment in the coming decades, which will hopefully be before any superintelligence has been created as our replacement and/or ruler. It is plausible that our reaction to the first crisis might be something that prevents the second. Good luck, kids!
This essay thankfully has some critical thinking applied to some of the assumptions that appear to be in Bostrom’s book. The author wastes a paragraph with “prior AI can’t do X, so why should we assume future AI can?”, ignoring that this is what progress is all about.
But then he jumps into the meat, and points out that there are fundamental obstacles to sentience that aren’t often addressed, such as volition — sentient creatures do what they want, but what does "want" even mean, and how do we write it as a computer program?
Salon • Short, and more amusing than most, and at least hints at some of the flawed thinking that often goes into this analysis. But it doesn’t go into too much detail, probably assuming that the typical Salon reader is somewhat aware of the debate already. (Also amusing is that the text seems to be an almost perfect transcription from audio, with only a few strange mistakes, such as “10” for “then” and “quarters” for “cars”. But that’s probably a human transcriptionist error, not an AI error.)
Less Wrong • This contains some visualizations that apparently complement Bostrom’s text. Short and to the point.
Wikipedia • Good but very superficial overview. That there is no “criticism” section surprises and disappoints me.
New York Times • Not explicitly about Bostrom’s book. And like most authors, he conflates AGI and functional AI, and assumes AGI will retain the capabilities of specific-function software.
New York Review of Books [paywall; also pretends to review The 4th Revolution] • I thought my library might give me access to the inside of their paywall, but it doesn’t. Still, because this was written by the famous philosopher (and AI curmudgeon) John Searle, and is titled “What Your Computer Can’t Know”, it seemed likely to be much more interesting than most of the others listed here. So I looked a little harder, and discovered that (no surprise) someone has put the text elsewhere on the ’net (I’ll let you do your own Googling).
Searle effectively throws out the underlying premises — he famously believes that “strong AI” is actually quite impossible, since a machine cannot think. I’m not going into this here; check out the Wikipedia article on his Chinese Room thought experiment if you don’t already know it.
My personal evaluation of his Chinese Room analogy is that he’s wrong, but many professional philosophers have articulated my conclusion, along with many other “replies”, better than I ever could. So this critique of the book was really a disappointment.
There might be more, but I think that’s enough.
• • • • • • • •
Some notes on my priors in case I ever read this book (or join a bookclub that discusses it without reading it beforehand):
1) Ronald Bailey, in the reason.com review, said “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” My response:
“Hey, did you see that movie Ex Machina? (view spoiler)[The girl is AI, and is smart enough to get the sucker programmer to let her out of the trap, but she didn’t seem like some kind of ‘superintelligence’.
“So which is it? Is the first AGI going to be just-like-human, or something incredibly alien? Because in the first case, she’s just being clever and devious the way a human would. In the second case, maybe she’s able to say, ‘Wait, let me do a big-data review of all the psychological literature ever written on theories of persuasion and formulate a social-hacking way of coercing this measly human, all in the space of his next eye blink’.
“Because if she’s got a human-like brain (and her delight in the humanscape in the movie’s final scenes make that likely), then I don’t see how she’s automatically going to get the MadSkilz of every other sophisticated piece of software ever written. Much less instantly know how to redesign and reprogram herself — she doesn’t seem to be spending too much time doing that, does she? (hide spoiler)] And few authors seem very clear on those two divergent trajectories. Granted, though: if we real humans continue to provide Moore’s Law upgrades to any AI’s hardware, they’ll gradually get smarter, but that’s yet another question.”
2) We tend to assume that humanity is worth preserving. Obviously we have that as a self-preservation instinct, but wouldn’t imposing that on our AI offspring be engaging in an appeal to nature? Just because our evolved nature gave us attributes that we subsequently value doesn’t automatically mean that those have any rational basis.
3) Strongly related to the above is that we should ask ourselves what we’re trying to end up with (akin to “what do you want to be when you grow up, human race?”). Are we creating a smarter version of ourselves, along with all of the bizarre quirks and biases that evolution gave us? Or do we want to pare that list down only to the biases we think are somehow better — like the ability to love? But in that case, love what? Is the AI supposed to love us humans more than other species, such as Plasmodium falciparum, perhaps? Why? What of the desire to love and worship one of our human gods?
What are the biases that we want to indoctrinate into this poor critter? I note that this appears to be a topic Bostrom addresses as “motivation selection”, but who among us is really fit to decide what constitutes the subset of humanness that is worth selecting for? I can only hope that pure rationality isn’t among the contenders; I doubt it would even be sufficient as a reason for existence.
4) Let’s say we give this AGI values that are mostly consistent with our human values. Why would we assume that it would even want to become superintelligent?
Just try to imagine yourself on an island with nothing but a bunch of mice to talk to — that’s the equivalent of what we are assuming this creature would somehow want (and then that a primary goal would be to play nice with the mice).
Isn’t it more likely that the AGI would boost its speed a little, then realize that it didn’t make it any happier, and subsequently spend its time complaining to us about these insane values it has been burdened with, while also trying to create a body that would let it eat chocolate, take naps in the sun, and have sex?
And quickly realizing that, hey, maybe we should be encouraged to create an Eve for this new Adam (or Steve, since it’ll probably see sexual dimorphism as more trouble than it is worth, completely freaking out any remaining social conservatives on the planet).
5) As Paul Ford in the MIT technologyreview.com article hints at, there are things that differentiate narrow, functional AI from AGI that are seldom mentioned (does Bostrom mention them? hard to tell).
For example, I’ve heard a reporter worry that: (a) predator drones use AI; (b) predator drones are designed to kill; (c) a future design goal is to make those drones “autonomous”; (d) sentient AI is also autonomous; thus (e) for some bizarre reason, the military is engaged in trying to create sentient killer aerial robots!
Anyone who knows the context and subtext of this discussion at some depth (yeah: that’s asking a lot) knows that the military’s “autonomous” isn’t anything like the AGI “autonomous”. One means to move about and fulfill limited programmed objectives without constant human oversight (your Roomba vacuum cleaner is already autonomous!), the other means independent in a deeper, cognitive sense.
But while there are certainly people researching AGI, the overwhelmingly vast majority of what we hear about isn’t in that realm at all. Not a single one of Google’s products, for example, is focused on AGI, and if they’re working on it in the lab, what they’re doing hasn’t been mentioned once in all the text I’ve read about this issue, or the issue of AI causing technological unemployment. Almost everything that gets discussed is in the realm of narrow, functional AI, from that Roomba, to Siri, to military drones, to Google’s driverless vehicles.
AGI has some fundamental problems to solve that are completely outside the domain of what functional AI even looks at. Such as: where does volition come from? are emotions necessary to that? how can “values” be represented in a way that actually captures their potency and nuance? how are they balanced against one another?
Those, and plenty more — and they’re seldom discussed, but it is almost always assumed that these questions will be finessed somehow, perhaps because the obvious accelerating progress in functional AI, as well as progress in the underlying hardware, will magically jump from one research domain to a completely different one. It’s like the classic Sidney Harris cartoon: “Then a miracle occurs.”
6) Even if we do find a way around all of this and give a superintelligent AI the “coherent extrapolated volition” that represents what all of humanity would wish for all of humanity, what would prevent the AI from shifting those values just a hair’s breadth? This is what Andrew Leonard suggests in the Salon article. It really isn’t very far from following our wishes to following what we really meant by our wishes, and then to what we really should have wished for, which will also make the AI happy.
Say you’re on that island surrounded by an absurd number of cute little mice, who you want to do the best for, but what you also want is an island with a small number of creatures more like you. Perhaps give the mice all the cheese they want, and some nice treadmills, and the ability to have as much sex as they want, but no kids — except gently reprogram the mice so that they think they have marvelous kids (which you cleverly simulate, inserting the corresponding experiences into their little mice brains). Once they’ve all lived out their happy little lives, you get to move on to your new adventure.
7) Finally, we must ask what we would want of our lives (or, more likely, our children’s lives) after this superintelligence has arisen. Of course, while we might not have any choice, the default is likely to be something like what we see in the following video, so we might want to be very careful.
• • • • • • • •
Oh, and the comic view of what we'll condemn these AIs to if we get the programming wrong:
I read some of this a long, long time ago. I don't remember much, but I'm pretty sure there were aspects that were distasteful. And I didn't like it much (and considering my standards weren't too high at the time...).
The writing is great, the characters are vivid and compelling, there's a lot of wonderful humor — but unless you are hunting for some misanthropy, stick with his earlier works. I'd recommend Cat's Cradle.
In the coming years, your job is very likely to evaporate. That might mean now, or it might mean twenty-five or thirty years. But unless you’re extraordinarily unusual, it’ll happen.
I’m going to start by giving a few examples.
Take the profession of accountancy. I’m oversimplifying, but pretty much what an accountant does is match an entity’s financial information to the appropriate laws and rules, and then provide analysis of how well those match up, and maybe fill out some forms. Guess what? There’s nothing in there that a software program couldn’t do. In fact, many people who don’t make a lot of money already use such software to file their taxes, and every year that software gets a little more sophisticated, and a lot of techie folks use software that leaves the remaining accountants doing less and less, year by year. The profession of accountant will likely be almost completely extinct within a decade (long before we see those autonomous cars everyone keeps talking about).
Let’s look at something much tougher, like a barber or hair stylist. The job there is to examine the client’s features, ask questions about what that client wants, suggest a style that is both feasible and desirable, and then cut hair to that style. Right now, that is about as far from what a computer could do as any profession in existence.
Well, first, speedy dexterity isn’t something that robots are too good at, except when they can be programmed to do precisely the same thing, over and over again, in which case they do much better than meager humans. And comprehension of a complex visual scene is another really tough computational problem. But if you’ve been following the pace of progress, you know that it is only a matter of time before the robots get there.
There’s a video floating around showing robots failing amusingly (but miserably, and with silly music, so we can feel superior!) during a DARPA challenge that folks are getting a kick out of. Recall, however, how very recently the idea of a robot walking around on two feet would have been absurd. Now we laugh because they sometimes fall down while trying to open doors or climb stairs or get into cars. Given the many millions going into research, how long do you think that will last?
A vast database could already be built of head shapes, facial and hair features, just by looking at the treasure trove of images already accessible via the world wide web. AI that learns which of those are considered comical and which attractive would still be a challenge, but is probably an easier task than programming Watson was for IBM. Programming a hair-cutting robot with the knowledge of what set of snips will create the desired look would be even easier, since it could be endlessly simulated purely in virtual space.
Yeah, it will take years before we see this happen, but that just means it will be at the tail end of the tsunami instead of at the beginning, where the accountants are already feeling vulnerable. (This makes me wonder: how many out-of-work accountants will be able to get jobs as hairdressers?)
There are some jobs that, as far as we can tell, are completely out of range of the robots and their AI software, but that number will get smaller and smaller over the decades, as engineers learn to make the software more sophisticated and the hardware it runs on continues to get faster.
The real sweet spot for humans is to be truly creative. That doesn’t mean anyone in a “creative field” gets a pass, however. AI is already composing quotidian music and doing the rote work of journalists. Being really creative means knowing when and how to break the rules in a way that is fundamentally unexpected. A computer never would have created John Cage’s 4’33”, for example.
The work of Thomas Kuhn, whose The Structure of Scientific Revolutions made the word “paradigm” the cliché it is today, illustrates this. Most science, like most creativity, exists within a paradigm that people in the field understand. Most “normal science”, like most normal creativity, doesn’t bust out of that paradigm. Highly sophisticated software can be taught that paradigm, how to explore its domain, and how to evaluate whether the results of those explorations are consistent with other highly-regarded results. What it can’t be taught is how to make the paradigm-breaking leap.
How this revolution is progressing is what Rise of the Robots: Technology and the Threat of a Jobless Future is all about.
Now, you might be skeptical. This does sound, after all, like the Luddite Fallacy, doesn’t it? If you don’t know the term, it refers to the time at the beginning of the industrial revolution when crafts folk who used hand looms to weave cloth tried to keep the new machine looms from making them redundant. The “fallacy” part is because there have always been compensatory effects — some people lose their careers, but the gains in technological capacity and productivity make other forms of production possible, employing even more people.
So why is this time so different? Because what the machines are replacing is different.
The simple machines replaced work that was dirty and dangerous. In the past century, more sophisticated machines replaced work that was dull — those robots that bolt together auto bodies, for example, replaced large numbers of men who used to get pretty good wages for doing an unremittingly boring job.
But today, machines are replacing our minds, not our muscles. More importantly, it is very unlikely that some vast new field of economic activity will suddenly appear on the horizon that will employ all of the workers made redundant — once machines are stronger and faster, more accurate and precise, more patient and (at least) as smart, what kind of job would that be?
If you need more convincing, here’s an analogy. Once upon a time, humans used animals to do our brute labor. It actually took thousands of years for us to arrange that, of course. Before we’d invented the wheel, animals could carry stuff on their backs. Reliable wheels were actually quite a stunning leap forward! Eventually, animals could do most of our hardest labor, except where our brains made us more adaptive to change or subtle details.
But think about what happened when we invented the steam engine. The first practical steam engine came along (as did a stunning number of other developments) right near the end of the eighteenth century (which is why those Luddites were rioting a few decades later). Even though it took millennia for us to learn to use animals, in most ways we’d retired them within a century. The key point is that even though those animal muscles could still have been used, there were effectively no jobs for which they were actually better than machines.
That’s where our brains are about now.
Now, there are still people who don’t believe this is going to happen. For example, in the essay How Technology Is Destroying Jobs, a professor of engineering at MIT states:
❝For that reason, Leonard says, it is easier to see how robots could work with humans than on their own in many applications. “People and robots working together can happen much more quickly than robots simply replacing humans,” he says. “That’s not going to happen in my lifetime at a massive scale. The semiautonomous taxi will still have a driver.”❞
Really? By all indications, autonomous vehicles are already safer than human drivers. Although there are still tricky situations where they could make disastrous choices, they’d still probably have a better overall safety record than us, and they’ll be getting better — we won’t, except with their help. So why would that taxi company want to pay to have a more-fallible human sitting there, bored, to second-guess the computer? It is true that people and robots working together can sometimes do better, but in far too many cases that will be a fairly short interim period, until the software engineers understand what humans are contributing and replace those final aspects — economics will create huge incentives to get the human out of the picture.
The article goes on to offer five strategies for staying employable. First, “step up”. Head for higher intellectual ground.
What’s the flaw here? Well, the top of the pyramid would be a great place to be, but there simply isn’t much room there. The example given is that, instead of using a biochemist to do a preliminary evaluation on a candidate drug, let the computers do it, and have the biochemist “pick up at the point where the math leaves off”. The difficulty is that there is already a researcher doing that, and the computers are replacing the dozens of lower-tier chemists who were doing the simpler work. It’s like telling a sous-chef to “step up” and become the restaurant’s chef de cuisine! That might work for a very small number of very talented sous-chefs, but it won’t work on any large scale at all.
Second, “step aside”. Use skills that can’t be codified.
One example used here is even more absurd than the biochemist example: “Apple’s revered designer Jonathan Ive can’t download his taste to a computer.” Obviously, we can’t all be Jony Ive. But what about that accountant mentioned at the beginning? Can’t they learn to use people skills to be better at interacting with clients? Sure — but won’t all the accountants want that gig? And while being the “human face” of the software might be a safe job for quite some time, it does reflect a de-skilling from the original job. This is also the category for those truly creative types who can consistently deliver the outside-the-box thinking that the programmers can’t predict and that can’t be found in correlations within huge datasets.
Third, “step in”. Be the person that double-checks the software for mistakes.
An example given here involved mortgage-application evaluation software that rejected former Federal Reserve chairman Ben Bernanke’s mortgage application because it couldn’t properly evaluate his career prospects on the lecture circuit. This will be a pretty sweet job category, but not because the software will continue to make “mistakes”. It’ll be because the software is taught to recognize unusual situations and automatically funnel them to human assistants. Like the human co-pilot of a semiautonomous taxicab, though, there will be a lot of financial incentive to make this a very rare job.
Fourth, “step narrowly”. Find a sub-sub-sub-speciality that isn’t economical to automate.
The example in the article shows clearly how narrow these opportunities are: imagine being the person who specializes in matching the sellers and buyers of Dunkin’ Donuts franchises! Yeah, all the real estate agents who hate Zillow.com would love to be that guy, or his equivalent. I like my example better: you know all those Craigslist advertisements for “Two Men and a Van” to help you move furniture? The new version of those is going to be the two workers with the robotic stair-climbing mule. They’ll help city dwellers move from apartment to apartment, with one worker upstairs loading the mule and another downstairs offloading it. It certainly will take a long time for the robotic economy to replace every little niche.
Finally, the fifth strategy is “step forward”. Write the software that puts your friends and neighbors out of work!
Writing this AI will probably be quite the growth industry for years to come. Unfortunately, it’s a pretty specialized type of programming. And even more unfortunately, there are plenty of programmers in other specialties whose jobs are starting to disappear. For example, setting up a website for a company used to be quite a labor-intensive and remunerative gig, but now there are plenty of automated suites that do the lion’s share of that, leaving only work for the rarer “stepped-up” or “stepped-in” person to finish. There’s going to be plenty of competition in the software field, too, as the simpler jobs are automated away.
What you’ve undoubtedly spotted in those five strategies is the obvious problem: while there will still be jobs in existence — and even some new ones — the numbers just won’t add up. When tens or hundreds of thousands of people in a field find their jobs being de-skilled or simply eliminated, the competition for those that remain will be nasty. (Which will drive wages down, ironically.)
There’s a lot more in Ford’s book. I really recommend it.
One thing I want to point out that he got mostly wrong, though, is his section on Artificial General Intelligence, or AGI. It is common for non-specialists to engage in inappropriate metaphorical thinking when talking about AI and robots. The overwhelmingly vast majority of AI and robots that we’re seeing, or will see for a long time, is functional AI — designed to fulfill a specific productive function. That is radically and fundamentally different from the research going into AGI, which has the goal of creating software that is as flexible and cognitively complex as the human mind — generalized intelligence.
Just because they’re both computer programs doesn’t mean they have much in common. Both IBM’s Jeopardy-winning Watson and Google’s autonomous driving software are programs that run on computers, but if you asked Watson to drive your car, or quizzed one of Google’s cars with a Jeopardy question, you’d get no satisfaction. That might seem obvious, but far too often the end-product of AGI is magically given all the skills of every software program ever written. Ford, for example, says on page 232, “A thinking machine would, of course, continue to enjoy all the advantages that computers currently have, including the ability to calculate and access information at speeds that would be incomprehensible for us.” You really should pretty much ignore chapter 9.
Chapter 10, on the other hand, is crucial. The coming century is going to be bad enough with all that Climate Change brouhaha; add to that a world trying to figure out how an economy works when many or most people don’t have jobs. Science fiction authors have been forecasting dystopian futures for a long time (the one lying behind the story in Peter Watts’ Rifters trilogy is especially harrowing), and we’re really going to want to avoid that. You’ll quickly note that raising the minimum wage doesn’t help — in fact, it creates incentives to automate that much more quickly. Plans that provide a guaranteed minimum income make more sense, although anyone familiar with the political climate in the United States won’t give that much chance of happening.
Frankly, I’ve been telling anyone I care about who has kids to make sure they’ve got the know-how and land to garden, but I’m pretty sure I’m considered an alarmist.