Kindle Notes & Highlights
frontier technology
1.35 million people a year still die in traffic accidents.
...nations are also caught in a contradiction. On the one hand, they are in a strategic competition to accelerate the development of technologies like AI and synthetic biology. Every nation wants to be, and be seen, at the technological frontier. On the other hand, they’re desperate to regulate and manage t...
Chinese AI policy has two tracks: a regulated civilian path and a freewheeling military-industrial one.
The reality is that containment is not something that a government, or even a group of governments, can do alone. It requires innovation and boldness in partnering between the public and the private sectors and a completely new set of incentives for all parties. Regulations like the EU AI Act do at least hint at a world where containment is on the map, one where leading governments take the risks of proliferation seriously, demonstrating new levels of commitment and willingness to make serious sacrifices.
Recall the four features of the coming wave: asymmetry, hyper-evolution, omni-use, and autonomy.
Is the technology omni-use and general-purpose or specific? A nuclear weapon is a highly specific technology with one purpose, whereas a computer is inherently multi-use. The more potential use cases, the more difficult to contain. Rather than general systems, then, those that are more narrowly scoped and domain-specific should be encouraged. Is the tech moving away from atoms toward bits? The more dematerialized a technology, the more it is subject to hard-to-control hyper-evolutionary effects. Areas like materials design or drug development are going to rapidly accelerate, making the pace of progress harder to track. Are price and compl...
Does the technology enable asymmetric impact? Think of a drone swarm against a conventional military, or a tiny computer or biological virus damaging vital social systems. Certain technologies carry a heightened risk of springing surprises and exploiting vulnerabilities.
Does it have autonomous characteristics? Is there scope for self-learning, or operation without oversight? Think gene...
Like climate change, technological risk can only be addressed at planetary scale, but there is no equivalent clarity. There’s no handy metric of risk, no flooded villages to raise awareness. Obscure research published on arXiv, in cult Substack blogs, or in dry think tank white papers hardly cuts it here.
Either we can grapple with the vast array of good and bad outcomes ignited by our continued openness and heedless chase, or we can confront the dystopian and authoritarian risks arising from our attempts to limit proliferation of powerful technologies, risks moreover inherent in concentrated ownership of those same technologies.
Current elites are so invested in their pessimism aversion that they are afraid to be honest about the dangers we face.
They’re happy to opine and debate in private, less so to come out and talk about it. They are used to a world of control and order: the control of a CEO over a company, of a central banker over interest rates, of a bureaucrat over military procurement, or of a town planner over which potholes to fix. Their levers of control are imperfect, sure, but they are known, tried, an...
The coming wave really is coming, but it hasn’t wa...
While unstoppable incentives are locked in, the wave’s final form, the precise contours of the dile...
They are about creating a different context for how technology is built and deployed: finding ways of buying time, slowing down, giving space for more work on the answers, bringing attention, building alliances, furthering technical work.
What these steps might do, however, is change the underlying conditions. Nudge forward the status quo so containment has a chance.
Contact with reality helps developers learn, correct, and improve their safety.
The ultimate control is hard physical control, of servers, microbes, drones, robots, and algorithms. “Boxing” an AI is the original and basic form of technological containment. This would involve no internet connections, limited human contact, and a small, constricted external interface. It would, literally, contain it in physical boxes with a definite location. A system like this—called an air gap—could...
...of the wider containment problem. While billions are plowed into robotics, biotech, and AI, comparatively tiny amounts get spent on a technical safety framework equal to keeping them functionally contained. The main monitor of bioweapons, for example, the Biological Weapons Convention, has a budget of just $1.4 million and only four full-time employees—fewer than the average McDonald’s.
It’s time for an Apollo program on AI safety and biosafety. Hundreds of thousands should be working on it.
...a minimum of 20 percent—of frontier corporate research and development budgets should be directed toward safety efforts, with an obligation to publish material findings to a government worki...
Although numbers are currently small, I know from experience that a groundswell of interest is emerging around these questions. Students and other young people I meet are buzzing about issues like AI alignment and pandemic preparedness. Talk to them and it’s clear the intellectual challenge appeals, but they’re also drawn to the moral imperative. They want to help, and feel a duty to do better. I’m confident that if the jobs and research programs are there, the talent will follow.
In AI, technical safety also means sandboxes and secure simulations to create provably secure air gaps so that advanced AIs can be rigorously tested before they are given access to the real world.
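To make the sandbox idea concrete, here is a minimal sketch in Python of the weakest form of this isolation: running an untrusted script in a child process with a stripped environment, no inherited file descriptors, and hard CPU, memory, and wall-clock limits. The `run_sandboxed` helper and the specific limits are invented for illustration; this is nowhere near a provably secure air gap, which would require virtual machines, hardware controls, and physical separation.

```python
# A minimal sketch of sandboxed execution, not a provably secure air gap.
# POSIX-only: the resource module and preexec_fn are unavailable on Windows.
import resource
import subprocess
import sys

def limit_resources():
    # Runs in the child just before exec: cap CPU seconds and address space.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    limit = 512 * 1024 * 1024  # 512 MB, an illustrative cap
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

def run_sandboxed(script_path):
    """Execute an untrusted Python script under hard limits."""
    return subprocess.run(
        [sys.executable, "-I", script_path],  # -I: Python's isolated mode
        env={},                      # no inherited environment variables
        close_fds=True,              # no inherited file descriptors
        preexec_fn=limit_resources,  # apply rlimits inside the child
        capture_output=True,
        timeout=10,                  # wall-clock budget in seconds
        text=True,
    )
```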
Explanation is another huge technical safety frontier. Recall that at present no one can explain why, precisely, a model produces the outputs it does. Devising ways for models to comprehensively explain their decisions or open them to scrutiny has become a critical technical puzzle for safety researchers.
Managing powerful tools itself requires powerful tools.
“provably beneficial AI.”
The highest-level challenge, whether in synthetic biology, robotics, or AI, is building a bulletproof off switch...
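In software terms, the very simplest version of an off switch is an external supervisor that can terminate a system unconditionally; the sketch below enforces a hard wall-clock budget with Python's subprocess module. The command and budget are illustrative placeholders, and a genuinely bulletproof off switch would need far more than this (hardware interlocks, redundant channels, resistance to the system routing around it), so treat this only as the shape of the idea.

```python
# A minimal sketch of a software "off switch": run an untrusted workload
# as a child process and kill it when a hard time budget expires.
import subprocess

def run_with_off_switch(cmd, budget_seconds=60):
    """Run cmd, terminating it unconditionally at the time budget."""
    try:
        return subprocess.run(cmd, capture_output=True, timeout=budget_seconds)
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child before raising TimeoutExpired,
        # so reaching this branch means the off switch has already fired.
        return None

# Example: a sleep that outlives its budget gets cut off after 2 seconds.
result = run_with_off_switch(["sleep", "10"], budget_seconds=2)
print("terminated by off switch" if result is None else "completed")
```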
Instead, we should identify problems early and then invest more time and resources in the fundamentals. Think big. Create common standards. Safety features should not be afterthoughts but inherent design properties of all these new technologies, the ground state of everything that comes next.
Let’s give them the intellectual oxygen and material support to succeed...
...engineering is never the whole answer; it’s a fundam...
A few years ago I co-founded a cross-industry and civil society organization called the Partnership on AI to help with this kind of work. We launched it with the support of all the major technology companies, including DeepMind, Google, Facebook, Apple, Microsoft, IBM, and OpenAI, along with scores of expert civil society groups, including the ACLU, the EFF, Oxfam, UNDP, and twenty others. Shortly after, it kick-started an AI Incidents Database, designed for confidentially reporting on safety events to share lessons with other developers. It has now collected more than twelve hu...
With more than a hundred partners from nonprofit, academic, and media groups, the partnership offers critical, neutral windows for interdis...
SecureDNA, a not-for-profit program started by a group of scientists and security specialists. At present only a fraction of synthesized DNA is screened for potentially dangerous elements, but a global effort like the SecureDNA program to plug every synthesizer—benchtop at home or...
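As a rough sketch of what synthesis screening involves, the toy code below slides a fixed window over an ordered sequence and flags any fragment that matches a hazard list. The window size and hazard fragments are invented placeholders; the actual SecureDNA system relies on curated databases and privacy-preserving cryptographic matching rather than a plain lookup like this.

```python
# A toy sketch of DNA order screening: flag windows matching known hazards.
WINDOW = 20  # nucleotides per screened fragment (illustrative choice)

# Hypothetical hazard fragments, stand-ins for a curated database.
HAZARD_FRAGMENTS = {
    "ATGCGTACGTTAGCATGCAA",
    "TTGACCGGTAACCGGTTAAC",
}

def screen_order(sequence: str) -> list[int]:
    """Return positions of any windows that match known hazard fragments."""
    seq = sequence.upper()
    return [
        i for i in range(len(seq) - WINDOW + 1)
        if seq[i:i + WINDOW] in HAZARD_FRAGMENTS
    ]

order = "CCGG" + "ATGCGTACGTTAGCATGCAA" + "TTAA"
print("flagged positions:", screen_order(order))  # -> [4]
```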