Kindle Notes & Highlights
These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor-replacing. They will eventually do cognitive labor more efficiently and more cheaply than many people working in administration, data entry, customer service (including making and receiving phone calls), writing emails, drafting summaries, translating documents, creating content, copywriting, and so on. In the face of an abundance of ultra-low-cost equivalents, the days of this kind of
...more
What happens when many, perhaps the majority, of the tasks required to operate a corporation, or a government department, can be run more efficiently by machines? Who will benefit first from these dynamics, and what will they likely do with this new power?
In a few decades, I predict most physical products will look like services. Zero-marginal-cost production and distribution will make it possible. The migration to the cloud will become all-encompassing, and the trend will be spurred by the ascendancy of low-code and no-code software, the rise of bio-manufacturing, and the boom in 3-D printing. When you combine all the facets of the coming wave, from the design, management, and logistical capabilities of AI to the modeling of chemical reactions enabled by quantum computing to the fine-grained assembly capabilities of robotics, you get a
...more
Such concentrations will enable vast, automated megacorporations to transfer value away from human capital—work—and toward raw capital. Put all the inequalities resulting from concentration together, and it adds up to another great acceleration and structural deepening of an existing fracture. Little wonder there is talk of neo- or techno-feudalism—a direct challenge to the social order, this time built on something beyond even stirrups.
The coming wave presents the disturbing possibility that this may no longer be true. Instead, it could initiate an injection of centralized power and control that will morph state functions into repressive distortions of their original purpose. Rocket fuel for authoritarians and for great power competition alike.
The only step left is bringing these disparate databases together into a single, integrated system: a perfect twenty-first-century surveillance apparatus. The preeminent example is, of course, China. That’s hardly news, but what’s become clear is how advanced and ambitious the party’s program already is, let alone where it might end up in twenty or thirty years.
Around half the world’s billion CCTV cameras are in China. Many have built-in facial recognition and are carefully positioned to gather maximal information, often in quasi-private spaces: residential buildings, hotels, even karaoke lounges. A New York Times investigation found the police in Fujian Province alone estimated they held a database of 2.5 billion facial images. They were candid about its purpose: “controlling and managing people.”
In smart warehouses every micromovement of every worker is tracked down to body temperature and loo breaks. Companies like Vigilant Solutions aggregate movement data based on license plate tracking, then sell it to jurisdictions like state or municipal governments. Even your take-out pizza is being watched: Domino’s uses AI-powered cameras to check its pies. Just as much as anyone in China, those in the West leave a vast data exhaust every day of their lives. And just as in China, it is harvested, processed, operationalized, and sold.
Modern civilization writes checks only continual technological development can cash.
Make no mistake: standstill in itself spells disaster.
Here, it seems, is the answer, the way out of the dilemma, the key to containment, savior of the nation-state, and of civilization as we know it. Deft regulation, balancing the need to make progress alongside sensible safety constraints, on national and supranational levels, spanning everything from tech giants and militaries to small university research groups and start-ups, tied up in a comprehensive, enforceable framework. We’ve done it before, so the argument goes; look at cars, planes, and medicines. Isn’t this how we manage and contain the coming wave?
At Inflection, for example, we are finding ways to encourage our AI called Pi—for personal intelligence—to be cautious and uncertain by default, and to encourage users to remain critical.
We’re designing Pi to express self-doubt, solicit feedback frequently and constructively, and quickly give way on the assumption that the human, not the machine, is right.
The highest-level challenge, whether in synthetic biology, robotics, or AI, is building a bulletproof off switch, a means of closing down any technology threatening to run out of control. It’s raw common sense to always ensure there is an off switch in any autonomous or powerful system. How to do this with technologies as distributed, protean, and far-reaching as those of the coming wave—technologies whose precise form isn’t yet clear, technologies that in some cases might actively resist—is an open question. It’s a huge challenge. Do I think it’s possible? Yes—but no one should downplay
...more
Another interesting example is “red teaming”—that is, proactively hunting for flaws in AI models or software systems.
Recent history suggests that for all its global proliferation, technology rests on a few critical R&D and commercialization hubs: choke points. Consider these points of extraordinary concentration: Xerox and Apple for interfaces, say, or DARPA and MIT, or Genentech, Monsanto, Stanford, and UCSF for genetic engineering. It’s remarkable how slowly this legacy is disappearing.
For now, AGI is realistically pursued by a handful of well-resourced groups, most notably DeepMind and OpenAI. Global data traffic travels through a limited number of fiber-optic cables bunched in key pinch points (off the coast of southwest England or Singapore, for example). A crunch on critical elements like cobalt, niobium, and tungsten could topple entire industries. Some 80 percent of the high-quality quartz essential to things like photovoltaic panels and silicon chips comes from a single mine in North Carolina.
Corporations traditionally have a single, unequivocal goal: shareholder returns. For the most part, that means the unimpeded development of new technologies. While this has been a powerful engine of progress throughout history, it’s poorly suited to containment of the coming wave. I believe that figuring out ways to reconcile profit and social purpose in hybrid organizational structures is the best way to navigate the challenges that lie ahead, but making it work in practice is incredibly hard.
Our proposal was to spin DeepMind out as a new form of “global interest company,” with a fully independent board of trustees separate from, and in addition to, the board of directors tasked with operationally running the company.
Countries need to understand in detail, for example, what data their populations supply, how and where it is used, and what it means; administrations should have a strong sense of the latest research, where the frontier is, where it’s going, and how their country can maximize the upsides. Above all, they need to log all the ways technology causes harm—tabulate every lab leak, every cyberattack, every language model bias, every privacy breach—in a publicly transparent way so everyone can learn from failures and improve.
Certain use cases, like AI for electioneering, should be prohibited by law as part of the package.
Today anyone can build AI. Anyone can set up a lab. We should instead move to a more licensed environment.
The most sophisticated AI systems or synthesizers or quantum computers should be produced only by responsible certified developers.
Just as you cannot simply launch a rocket into space without FAA approval, so tomorrow you shouldn’t simply be able to release a state-of-the-art AI.
increase global transparency at the frontier, asking questions like: Does the system show signs of being able to improve its own capabilities? Can it specify its own goals? Can it acquire more resources without human oversight? Has it been deliberately trained in deception or manipulation?
For the most part, concerns over technology like those outlined in this book are elite pursuits, nice talking points for the business-class lounge, op-eds for bien-pensant publications, or topics for the presentation halls at Davos or TED.
Some level of policing the internet, DNA synthesizers, AGI research programs, and so on is going to be essential. It’s painful to write. As a young twentysomething, I started out from a privacy-maximalist position, believing spaces of communication and work completely free from oversight were foundational rights and important parts of a healthy democracy. Over the years, though, as the arguments became clearer and the technology ever more developed, I’ve updated that view. It’s just not acceptable to create situations where the threat of catastrophic outcomes is ever present. Intelligence,
...more
Some measure of anti-proliferation is necessary. And yes, let’s not shy away from the facts; that means real censorship, possibly well beyond national borders.
Ultimately, human beings may no longer be the primary planetary drivers, as we have become accustomed to being.
We are going to live in an epoch when the majority of our daily interactions are not with other people but with AIs. This might sound intriguing or horrifying or absurd, but it is happening. I’m guessing you already spend a sizable portion of your waking hours in front of a screen. Indeed, you may spend more time looking at the collective screens in your life than at any given human, spouses and children included.
The Luddite reaction is natural, expected. But as always, it will be futile.
Technologists should focus not just on the engineering minutiae but on helping to imagine and realize a richer, social, human future in the broadest sense, a complex tapestry of which technology is just one strand.