Kindle Notes & Highlights
With AI, we could unlock the secrets of the universe, cure diseases that have long eluded us, and create new forms of art and culture that stretch the bounds of imagination. With biotechnology, we could engineer life to tackle diseases and transform agriculture, creating a world that is healthier and more sustainable.
The rise and spread of technologies have also taken the form of world-changing waves. A single overriding trend has stood the test of time since the discovery of fire and stone tools, the first technologies harnessed by our species. Almost every foundational technology ever invented, from pickaxes to plows, pottery to photography, phones to planes, and everything in between, follows a single, seemingly immutable law: it gets cheaper and easier to use, and ultimately it proliferates far and wide.
To achieve this objective, we would need to create a system that could imitate and then eventually outperform all human cognitive abilities, from vision and speech to planning and imagination, and ultimately empathy and creativity.
AI is everywhere, on the news and in your smartphone, trading stocks and building websites.
A few years after we founded DeepMind, I created a slide deck about AI’s potential long-term economic and social impacts.
One stood out. The presenter showed how the price of DNA synthesizers, which can print bespoke strands of DNA, was falling rapidly. Costing a few tens of thousands of dollars, they are small enough to sit on a bench in your garage and let people synthesize—that is, manufacture—DNA. And all this is now possible for anyone with graduate-level training in biology or an enthusiasm for self-directed learning online.
This was not science fiction, argued the presenter, a respected professor with more than two decades of experience; it was a live risk, now. They finished with an alarming thought: a single person today likely “has the capacity to kill a billion people.” All it takes is motivation.
Pessimism aversion is an emotional response, an ingrained gut refusal to accept the possibility of seriously destabilizing outcomes.
Then something remarkable happened. DQN appeared to discover a new, and very clever, strategy. Instead of simply knocking out bricks steadily, row by row, DQN began targeting a single column of bricks. The result was the creation of an efficient route up to the back of the block of bricks. DQN had tunneled all the way to the top, creating a path that then enabled the ball to simply bounce off the back wall, steadily destroying the entire set of bricks like a frenzied ball in a pinball machine. The method earned the maximum score with minimum effort. It was an uncanny tactic, not unknown to …
It’s often said that there are more potential configurations of a Go board than there are atoms in the known universe, roughly 10^90 times more, in fact! With so many possibilities, traditional approaches stood no chance. When IBM’s Deep Blue beat Garry Kasparov at chess in 1997, it used the so-called brute-force technique, in which an algorithm aims to systematically crunch through as many possible moves as it can. That approach is hopeless in a game with as many branching outcomes as Go.
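A back-of-the-envelope sketch makes the infeasibility concrete. The branching factors and game lengths below are standard rough estimates, not figures from the book: a full game tree has on the order of b**d positions for branching factor b and game length d.

```python
import math

def game_tree_size(branching_factor: int, depth: int) -> int:
    """Approximate number of leaf positions in a full game tree."""
    return branching_factor ** depth

# Rough, commonly cited estimates (assumptions, not from the text):
chess = game_tree_size(35, 80)    # ~35 legal moves, ~80 plies per game
go = game_tree_size(250, 150)     # ~250 legal moves, ~150 plies per game

print(f"chess ~ 10^{int(math.log10(chess))}")  # chess ~ 10^123
print(f"go    ~ 10^{int(math.log10(go))}")     # go    ~ 10^359
```

Even the chess tree dwarfs any conceivable compute budget; the Go tree is hundreds of orders of magnitude larger still, which is why AlphaGo had to learn to evaluate positions rather than enumerate them.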
Later versions of the software like AlphaZero dispensed with any prior human knowledge. The system simply trained on its own, playing itself millions of times over, learning from scratch to reach a level of performance that trounced the original AlphaGo without any of the received wisdom or input of human players. In other words, with just a day’s training, AlphaZero was capable of learning more about the game than the entirety of human experience could teach it.
AI, synthetic biology, robotics, and quantum computing can sound like a parade of overhyped buzzwords. Skeptics abound. All of these terms have been batted around popular tech discourse for decades. And progress has often been slower than advertised.
Shortly after DQN, we sold DeepMind to Google, and the tech giant soon switched to a strategy of “AI first” across all its products.
AI is becoming much easier to access and use: tools and infrastructure like Meta’s PyTorch or OpenAI’s application programming interfaces (APIs) help put state-of-the-art machine learning capabilities in the hands of nonspecialists. 5G and ubiquitous connectivity create a massive, always-on user base.
AI systems run retail warehouses, suggest how to write emails or what songs you might like, detect fraud, write stories, diagnose rare conditions, and simulate the impact of climate change. They feature in shops, schools, hospitals, offices, courts, and homes. You already interact many times a day with AI; soon it will be many more, and almost everywhere it will make experiences more efficient, faster, more useful, and frictionless. AI is already here. But it’s far from done.
LLMs take advantage of the fact that language data comes in a sequential order. Each unit of information is in some way related to data earlier in a series. The model reads very large numbers of sentences, learns an abstract representation of the information contained within them, and then, based on this, generates a prediction about what should come next. The challenge lies in designing an algorithm that “knows where to look” for signals in a given sentence.
It’s worth noting that humans do this with words, of course, but the model doesn’t use our vocabulary: it operates on its own learned units of text, called tokens.
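The "predict what comes next" idea can be sketched with a toy bigram counter. This is far cruder than a real neural language model, which learns abstract representations rather than raw counts, and the corpus and names here are invented for illustration:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate ."
tokens = corpus.split()  # a real model uses learned subword tokens, not words

# Count which token followed each token in the training sequence.
successors = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    successors[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often in training."""
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # -> cat  ("cat" followed "the" twice, "mat" once)
```

The training objective of a large language model is the same in spirit: given everything so far, assign probabilities to what comes next, only with a learned representation doing the "knowing where to look."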
To get a sense of one petaFLOP, imagine a billion people each holding a million calculators, doing a complex multiplication, and hitting “equals” at the same time. I find this extraordinary. Not long ago, language models struggled to produce coherent sentences. This is far, far beyond Moore’s law or indeed any other technology trajectory I can think of. No wonder capabilities are growing.
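The analogy's arithmetic checks out: a billion people times a million calculators is 10^15 simultaneous operations, and one petaFLOP is 10^15 floating-point operations per second.

```python
people = 10**9          # a billion people
calculators = 10**6     # a million calculators each
ops = people * calculators

peta = 10**15           # one petaFLOP = 10^15 operations per second
print(ops == peta)      # True: everyone hitting "equals" once per second is 1 petaFLOP/s
```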
I think of this as “artificial capable intelligence” (ACI), the point at which AI can achieve complex goals and tasks with minimal oversight. AI and AGI are both parts of the everyday discussion, but we need a concept encapsulating a middle layer in which the Modern Turing Test is achieved but before systems display runaway “superintelligence.” ACI is shorthand for this point.
There will be thousands of these models, and they will be used by the majority of the world’s population. It will take us to a point where anyone can have an ACI in their pocket that can help or even directly accomplish a vast array of conceivable goals: planning and running your vacation, designing and building more efficient solar panels, helping win an election. It’s hard to say for certain what happens when everyone is empowered like this, but this is a point we’ll return to in part 3.
Despite some notable achievements, initial progress in the field was slow, because genetic engineering was a costly, difficult process prone to failure. Over the last twenty or so years, however, that has changed. Genetic engineering has gotten much cheaper and much easier. (Sound familiar?) One catalyst was the Human Genome Project. This was a thirteen-year, multibillion-dollar endeavor that gathered together thousands of scientists from across the world, in private and public institutions, with a single goal: unlocking the three billion letters of genetic information making up the human …
While Moore’s law justifiably attracts considerable attention, less well known is what The Economist calls the Carlson curve: the epic collapse in costs for sequencing DNA. Thanks to ever-improving techniques, the cost of human genome sequencing fell from $1 billion in 2003 to well under $1,000 by 2022. That is, the price dropped a millionfold in under twenty years, a thousand times faster than Moore’s law. A stunning development hiding in plain sight.
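The "thousand times faster than Moore's law" comparison can be checked with the passage's own figures, assuming a Moore's-law pace means costs halving roughly every two years:

```python
years = 2022 - 2003                        # 19 years
sequencing_drop = 1_000_000_000 / 1_000    # $1 billion down to $1,000: a millionfold
moores_law_drop = 2 ** (years / 2)         # halving every 2 years: ~724x over 19 years

ratio = sequencing_drop / moores_law_drop
print(round(ratio))                        # ~1381: about a thousand times faster
```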
You can now buy a benchtop DNA synthesizer (see the next section) for as little as $25,000 and use it as you wish, without restriction or oversight, at home in your bio-garage.
This is the promise of evolution by design, tens of millions of years of history compressed and short-circuited by directed intervention. It brings together biotechnology, molecular biology, and genetics with the power of computational design tools.
Using a gene for light-detecting proteins taken from algae to rebuild nerve cells, scientists successfully restored limited vision to a blind man in 2021.
A world where life spans are set to average a hundred years or more is achievable in the next decades. Nor is this just about longer life; it’s about healthier lives as we get older.
DNA is itself the most efficient data storage mechanism we know of—capable of storing data at millions of times the density of current computational techniques with near-perfect fidelity and stability.
General-purpose technologies are accelerants. Invention sparks invention. Waves lay the ground for further scientific and technological experimentation, nudging open the doors of possibility.
In 2019, Google announced that it had reached “quantum supremacy.” Researchers had built a quantum computer, one using the peculiar properties of the subatomic world. Chilled to a temperature colder than the coldest parts of outer space, Google’s machine used an understanding of quantum mechanics to complete a calculation in seconds that would, it said, have taken a conventional computer ten thousand years. It had just fifty-three “qubits,” or quantum bits, the core units of quantum computing. To store equivalent information on a classical computer, you would need seventy-two billion gigabytes …
Renewable energy will become the largest single source of electricity generation by 2027.
A precision missile in a conventional military costs hundreds of thousands of dollars; with AI, consumer-grade drones, custom software, and 3-D printed parts, something similar has now been battle-tested in Ukraine at a cost of around $15,000.
A single AI program can write as much text as all of humanity.
The next forty years will see both the world of atoms rendered into bits at new levels of complexity and fidelity and, crucially, the world of bits rendered back into tangible atoms with a speed and ease unthinkable until recently.
Nobody hand codes GPT-4 to write like Jane Austen, or produce an original haiku, or generate marketing copy for a website selling bicycles. These features are emergent effects of a wider architecture whose outputs are never decided in advance by its designers.
You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain what caused something to happen.
Right at the cutting edge, however, some AI researchers want to automate every aspect of building AI systems, feeding that hyper-evolution, but potentially with radical degrees of independence through self-improvement.
If Seoul offered a hint, Wuzhen brought it home. As the dust settled, it became clear AlphaGo was part of a much bigger story than one trophy, system, or company; it was that of great powers engaging in a new and dangerous game of technological competition—and a series of overwhelmingly powerful and interlocking incentives that ensure the coming wave really is coming.
Earlier we saw that no wave of technology has, so far, been contained. In this chapter we look at why history is likely to repeat itself; why, thanks to a series of macro-drivers behind technologies’ development and spread, the fruit will not be left on the tree; why the wave will break. As long as these incentives are in place, the important question of “should we?” is moot.
Technology has become the world’s most important strategic asset, not so much the instrument of foreign policy as the driver of it.
At DeepMind, I always pushed back on references to us as a Manhattan Project for AI, not just because of the nuclear comparison, but because even the framing might initiate a series of other Manhattan Projects, feeding an arms race dynamic when close global coordination, break points, and slowdowns were needed.
Because these technologies are getting cheaper and simpler to use even as they get more powerful, more nations can engage at the frontier.
Obscure work done by a computer science grad student one year might be in the hands of hundreds of millions of users the next. That makes it hard to predict or control. Sure, tech companies want to keep their secrets, but they also tend to abide by the open philosophies characterizing software development and academia. Innovations diffuse faster, further, and more disruptively as a result.
Worldwide R&D spending is at well over $700 billion annually, hitting record highs. Amazon’s R&D budget alone is $78 billion, which would be the ninth biggest in the world if it were a country.
It’s ultimately luck that demand for photorealistic gaming meant companies like NVIDIA invested so much into making better hardware, and that this then adapted so well to machine learning.
PwC forecasts AI will add $15.7 trillion to the global economy by 2030. McKinsey forecasts a $4 trillion boost from biotech over the same period.
Instead of just consuming content, anyone can produce expert-quality video, image, and text content. AI doesn’t just help you find information for that best man speech; it will write the speech, too. And all on a scale unseen before. Robots won’t just manufacture cars and organize warehouse floors; they’ll be available to every garage tinkerer with a little time and imagination. The past wave enabled us to sequence, or read, DNA. The coming wave will make DNA synthesis universally available.
Today, no matter how wealthy you are, you simply cannot buy a more powerful smartphone than is available to billions of people. This phenomenal achievement of civilization is too often overlooked. In the next decade, access to ACIs will follow the same trend. Those same billions will soon have broadly equal access to the best lawyer, doctor, strategist, designer, coach, executive assistant, negotiator, and so on. Everyone will have a world-class team on their side and in their corner.
The threat here lies not so much with extreme cases as in subtle, nuanced, and highly plausible scenarios being exaggerated and distorted. It’s not the president charging into a school screaming nonsensical rubbish while hurling grenades; it’s the president resignedly saying he has no choice but to institute a set of emergency laws or reintroduce the draft. It’s not Hollywood fireworks; it’s the purported surveillance camera footage of a group of white policemen caught on tape beating a Black man to death.
Gain-of-function research is, suffice to say, controversial. For a time U.S. funding agencies imposed a moratorium on funding it. In a classic failure of containment, such work resumed in 2019. There is at least some indication that COVID-19 has been genetically altered and a growing body of (circumstantial) evidence, from the Wuhan Institute’s track record to the molecular biology of the virus itself, suggesting a lab leak might have been the origin of the pandemic.
Both the FBI and the U.S. Department of Energy believe this to be the case, with the CIA undecided.