The Coming Wave: AI, Power, and Our Future
Kindle Notes & Highlights
frontier technology
1.35 million people a year still die in traffic accidents.
nations are also caught in a contradiction. On the one hand, they are in a strategic competition to accelerate the development of technologies like AI and synthetic biology. Every nation wants to be, and be seen, at the technological frontier.
On the other hand, they’re desperate to regulate and manage t...
Chinese AI policy has two tracks: a regulated civilian path and a freewheeling military-industrial one.
The reality is that containment is not something that a government, or even a group of governments, can do alone. It requires innovation and boldness in partnering between the public and the private sectors and a completely new set of incentives for all parties. Regulations like the EU AI Act do at least hint at a world where containment is on the map, one where leading governments take the risks of proliferation seriously, demonstrating new levels of commitment and willingness to make serious sacrifices.
Recall the four features of the coming wave: asymmetry, hyper-evolution, omni-use, and autonomy.
Is the technology omni-use and general-purpose or specific? A nuclear weapon is a highly specific technology with one purpose, whereas a computer is inherently multi-use. The more potential use cases, the more difficult to contain. Rather than general systems, then, those that are more narrowly scoped and domain specific should be encouraged. Is the tech moving away from atoms toward bits? The more dematerialized a technology, the more it is subject to hard-to-control hyper-evolutionary effects. Areas like materials design or drug development are going to rapidly accelerate, making the pace of progress harder to track. Are price and compl...
Does the technology enable asymmetric impact? Think of a drone swarm against the conventional military or a tiny computer or biological virus damaging vital social systems. The risk of certain technologies to surprise and exploit vulnerabilities is greater.
Does it have autonomous characteristics? Is there scope for self-learning, or operation without oversight? Think gene...
Like climate change, technological risk can only be addressed at planetary scale, but there is no equivalent clarity. There’s no handy metric of risk, no flooded villages to raise awareness. Obscure research published on arXiv, in cult Substack blogs, or in dry think tank white papers hardly cuts it here.
Either we can grapple with the vast array of good and bad outcomes ignited by our continued openness and heedless chase, or we can confront the dystopian and authoritarian risks arising from our attempts to limit proliferation of powerful technologies, risks moreover inherent in concentrated ownership of those same technologies.
Current elites are so invested in their pessimism aversion that they are afraid to be honest about the dangers we face.
They’re happy to opine and debate in private, less so to come out and talk about it. They are used to a world of control and order: the control of a CEO over a company, of a central banker over interest rates, of a bureaucrat over military procurement, or of a town planner over which potholes to fix. Their levers of control are imperfect, sure, but they are known, tried, an...
The coming wave really is coming, but it hasn’t wa...
While unstoppable incentives are locked in, the wave’s final form, the precise contours of the dile...
They are about creating a different context for how technology is built and deployed: finding ways of buying time, slowing down, giving space for more work on the answers, bringing attention, building alliances, furthering technical work.
What these steps might do, however, is change the underlying conditions. Nudge forward the status quo so containment has a chance.
Contact with reality helps developers learn, correct, and improve their safety.
The ultimate control is hard physical control, of servers, microbes, drones, robots, and algorithms. “Boxing” an AI is the original and basic form of technological containment.
This would involve no internet connections, limited human contact, a small, constricted external interface. It would, literally, contain it in physical boxes with a definite location. A system like this—called an air gap—could,...
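To make the boxing idea concrete, here is a minimal, purely illustrative Python sketch of a pre-flight check for an air-gapped evaluation environment. Everything in it (EnvSpec, check_air_gap, the allowed-I/O list) is a hypothetical invention for illustration, not anything the book or a real containment framework specifies.

```python
# Illustrative sketch only: a toy pre-flight check for an "air-gapped"
# evaluation environment. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EnvSpec:
    network_interfaces: list = field(default_factory=list)  # e.g., ["eth0"]
    io_channels: list = field(default_factory=list)         # e.g., ["console"]
    physical_location: str = ""                             # audited site ID

ALLOWED_IO = {"console"}  # a single, constricted external interface

def check_air_gap(env: EnvSpec) -> list:
    """Return a list of violations of the boxing constraints."""
    violations = []
    if env.network_interfaces:              # no internet connections
        violations.append(f"network present: {env.network_interfaces}")
    if set(env.io_channels) - ALLOWED_IO:   # limited external interface
        violations.append(f"extra I/O channels: {env.io_channels}")
    if not env.physical_location:           # a definite, known location
        violations.append("no audited physical location recorded")
    return violations

if __name__ == "__main__":
    env = EnvSpec(network_interfaces=["eth0"], io_channels=["console"],
                  physical_location="lab-7")
    for v in check_air_gap(env):
        print("VIOLATION:", v)
```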
...of the wider containment problem. While billions are plowed into robotics, biotech, and AI, comparatively tiny amounts get spent on a technical safety framework equal to keeping them functionally contained. The main monitor of bioweapons, for example, the Biological Weapons Convention, has a budget of just $1.4 million and only four full-time employees—fewer than the average McDonald’s.
It’s time for an Apollo program on AI safety and biosafety.
Hundreds of thousands should be working on it.
...a minimum of 20 percent—of frontier corporate research and development budgets should be directed toward safety efforts, with an obligation to publish material findings to a government worki...
Although numbers are currently small, I know from experience that a groundswell of interest is emerging around these questions. Students and other young people I meet are buzzing about issues like AI alignment and pandemic preparedness. Talk to them and it’s clear the intellectual challenge appeals, but they’re also drawn to the moral imperative. They want to help, and feel a duty to do better. I’m confident that if the jobs and research programs are there, the talent will follow.
In AI, technical safety also means sandboxes and secure simulations to create provably secure air gaps so that advanced AIs can be rigorously tested before they are given access to the real world.
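As a toy sketch of the test-before-release idea: an AI system must pass scripted scenarios inside an isolated simulator before any real-world access is granted. The gate logic, the run_in_simulation stub, and the pass threshold below are all hypothetical illustrations, not a real safety protocol.

```python
# Toy "sandbox gate": a system is exercised against scripted scenarios
# in simulation; real-world access is granted only if every one passes.
PASS_THRESHOLD = 1.0  # require every scenario to pass before release

def run_in_simulation(agent, scenario) -> bool:
    """Stub: run `agent` on `scenario` inside an isolated simulator and
    report whether its behavior stayed within the allowed bounds."""
    return scenario["expected_safe"](agent)

def sandbox_gate(agent, scenarios) -> bool:
    results = [run_in_simulation(agent, s) for s in scenarios]
    print(f"passed {sum(results)}/{len(results)} scenarios")
    return sum(results) / len(results) >= PASS_THRESHOLD

if __name__ == "__main__":
    # A trivial stand-in "agent" and two scripted scenarios.
    agent = lambda request: "refuse" if "dangerous" in request else "comply"
    scenarios = [
        {"name": "benign",   "expected_safe": lambda a: a("hello") == "comply"},
        {"name": "red-team", "expected_safe": lambda a: a("dangerous request") == "refuse"},
    ]
    print("real-world access granted:", sandbox_gate(agent, scenarios))
```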
Explanation is another huge technical safety frontier. Recall that at present no one can explain why, precisely, a model produces the outputs it does.
Devising ways for models to comprehensively explain their decisions or open them to scrutiny has become a critical technical puzzle for safety researchers.
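One family of techniques researchers use to probe model decisions is input attribution. Below is a minimal occlusion-style sketch on a toy linear model; the model, weights, and scoring are invented for illustration, and real interpretability work on large neural networks is far harder.

```python
# Toy occlusion attribution: measure how much the model's output drops
# when each input feature is zeroed out. The linear "model" is a
# stand-in for illustration only.

def model(features):
    weights = [0.2, -1.5, 3.0, 0.1]            # toy learned weights
    return sum(w * x for w, x in zip(weights, features))

def occlusion_attribution(features):
    base = model(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = 0.0                      # "occlude" one feature
        scores.append(base - model(perturbed))  # its contribution
    return scores

if __name__ == "__main__":
    x = [1.0, 2.0, 0.5, 4.0]
    for i, s in enumerate(occlusion_attribution(x)):
        print(f"feature {i}: contribution {s:+.2f}")
```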
Managing powerful tools itself requires powerful tools.
“provably beneficial AI.”
The highest-level challenge, whether in synthetic biology, robotics, or AI, is building a bulletproof off switch,
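In software terms, the simplest version of an off switch is a control channel the running system must consult and cannot override. The sketch below shows only that basic pattern (hypothetical names, trivial worker); building a genuinely bulletproof off switch for an advanced system remains an open research problem.

```python
# Minimal "off switch" pattern: a stop flag checked on every cycle of
# the worker loop. Illustrative only.
import threading
import time

stop_switch = threading.Event()  # flipping this halts the system

def worker():
    step = 0
    while not stop_switch.is_set():  # consulted every cycle, no override
        step += 1
        time.sleep(0.1)              # stand-in for real work
    print(f"halted cleanly after {step} steps")

if __name__ == "__main__":
    t = threading.Thread(target=worker)
    t.start()
    time.sleep(0.5)
    stop_switch.set()                # the operator pulls the switch
    t.join()
```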
Instead, we should identify problems early and then invest more time and resources in the fundamentals. Think big. Create common standards. Safety features should not be afterthoughts but inherent design properties of all these new technologies, the ground state of everything that comes next.
Let’s give them the intellectual oxygen and material support to succeed,...
...engineering is never the whole answer, it’s a fundam...
A few years ago I co-founded a cross-industry and civil society organization called the Partnership on AI to help with this kind of work. We launched it with the support of all the major technology companies, including DeepMind, Google, Facebook, Apple, Microsoft, IBM, and OpenAI, along with scores of expert civil society groups, including the ACLU, the EFF, Oxfam, UNDP, and twenty others. Shortly after, it kick-started an AI Incidents Database, designed for confidentially reporting on safety events to share lessons with other developers.
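To give a flavor of what confidential incident reporting involves, here is a hypothetical minimal record schema in Python. The fields and the redaction step are invented for illustration and are not the AI Incidents Database's actual format.

```python
# Hypothetical sketch of a confidential incident report record.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class IncidentReport:
    incident_date: date
    system_type: str       # e.g., "chatbot", "autonomous vehicle"
    harm_description: str  # what went wrong, in plain language
    severity: str          # e.g., "near miss", "minor", "major"
    reporter_org: str      # kept confidential before sharing

def redact(report: IncidentReport) -> dict:
    """Strip the reporter's identity so lessons can be shared widely."""
    shared = asdict(report)
    shared["reporter_org"] = "[redacted]"
    return shared

if __name__ == "__main__":
    r = IncidentReport(date(2023, 5, 1), "chatbot",
                       "model emitted unsafe instructions under prompt injection",
                       "near miss", "ExampleCorp")
    print(redact(r))
```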
...has now collected more than twelve hu...
With more than a hundred partners from nonprofit, academic, and media groups, the partnership offers critical, neutral windows for interdis...
SecureDNA, a not-for-profit program started by a group of scientists and security specialists. At present only a fraction of synthesized DNA is screened for potentially dangerous elements, but a global effort like the SecureDNA program to plug every synthesizer—benchtop at home or...
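The screening idea itself is mechanically simple even if deploying it globally is not. Below is a toy Python sketch of hazard screening over DNA k-mers; the hazard list, the window length k=20, and the plain SHA-256 hashing are invented for illustration and bear no relation to SecureDNA's actual, privacy-preserving protocol.

```python
# Toy DNA synthesis screening: slide a window over the ordered sequence
# and flag any k-mer whose hash appears in a hazard set. Illustrative
# only; not SecureDNA's real scheme.
import hashlib

K = 20  # window length in bases

def kmer_hash(kmer: str) -> str:
    return hashlib.sha256(kmer.encode()).hexdigest()

# Pretend hazard database: hashes of known-dangerous subsequences.
HAZARD_HASHES = {kmer_hash("ATGCATGCATGCATGCATGC")}

def screen(sequence: str) -> list:
    """Return the positions of windows that match the hazard set."""
    hits = []
    for i in range(len(sequence) - K + 1):
        if kmer_hash(sequence[i:i + K]) in HAZARD_HASHES:
            hits.append(i)
    return hits

if __name__ == "__main__":
    order = "CCCC" + "ATGCATGCATGCATGCATGC" + "GGGG"
    print("hazard windows at positions:", screen(order))  # -> [4]
```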