The Coming Wave: AI, Power, and Our Future
Kindle Notes & Highlights
Read between January 18 - February 25, 2024
48%
Concretely, a good proposal for legislation would be to require that a fixed portion—say, a minimum of 20 percent—of frontier corporate research and development budgets should be directed toward safety efforts, with an obligation to publish material findings to a government working group so that progress can be tracked and shared.
48%
groundswell
48%
In AI, technical safety also means sandboxes and secure simulations to create provably secure air gaps so that advanced AIs can be rigorously tested before they are given access to the real world. It means much more work on uncertainty, a major focus right now—that is, how does an AI communicate when it might be wrong?
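The question raised here, how an AI communicates when it might be wrong, can be illustrated with a minimal, hypothetical sketch (not from the book; the function names and the 0.8 threshold are illustrative assumptions): a classifier returns a label only when its top softmax probability clears a confidence threshold, and otherwise reports uncertainty instead of guessing.

```python
import math

def softmax(scores):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_or_abstain(scores, labels, threshold=0.8):
    """Return a label only when the model's top probability clears the
    threshold; otherwise signal uncertainty rather than guessing."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return labels[best]
    return "uncertain"
```

Real systems use far richer calibration methods, but the design choice is the same: the model's interface exposes its confidence rather than hiding it.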
48%
“corrigibility,”
49%
The highest-level challenge, whether in synthetic biology, robotics, or AI, is building a bulletproof off switch, a means of closing down any technology threatening to run out of control. It’s raw common sense to always ensure there is an off switch in any autonomous or powerful system. How to do this with technologies that are as distributed, protean, and far-reaching as in the coming wave—technologies whose precise form isn’t yet clear, technologies that in some cases might actively resist—is an open question. It’s a huge challenge. Do I think it’s possible? Yes—but no one should downplay ...
49%
Safety features should not be afterthoughts but inherent design properties of all these new technologies, the ground state of everything that comes next.
49%
Trust comes from transparency. We absolutely need to be able to verify, at every level, the safety, integrity, or uncompromised nature of a system.
49%
white hat hackers
49%
External scrutiny is essential. Right now there’s no global, formal, or routine effort to test deployed systems.
49%
Partnership on AI
49%
with the support of all the major technology companies,
49%
Shortly after, it kick-started an AI Incidents Database, designed for confidentially reporting on safety events to share lessons with other developers.
49%
“red teaming”—that is, proactively hunting for flaws in AI models or software systems. This means attacking your systems in controlled ways to probe for weaknesses and other failure modes.
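Red teaming in this sense, attacking your own systems in controlled ways, can be sketched in a few lines. This is a toy illustration rather than anything from the book: `toy_filter` stands in for a deployed safety check, and `red_team` probes it with simple perturbations of a known-bad input to find variants that slip through. All names and the banned phrase are hypothetical.

```python
def toy_filter(text):
    """Stand-in for a deployed safety filter: blocks one exact banned
    phrase. (Hypothetical target; real filters are far more complex.)"""
    return "blocked" if "forbidden phrase" in text.lower() else "allowed"

def red_team(target, bad_input):
    """Probe the target with simple perturbations of a known-bad input
    and report every variant that the filter fails to block."""
    variants = [
        bad_input,                       # unmodified baseline
        bad_input.upper(),               # case change
        bad_input.replace(" ", "  "),    # whitespace padding
        bad_input.replace("o", "0"),     # leetspeak substitution
    ]
    return [v for v in variants if target(v) == "allowed"]
```

Each variant that comes back `"allowed"` is a discovered failure mode; in a real red-team exercise those findings feed back into the filter before an adversary finds them first.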
49%
It’s also time to create government-funded red teams that would rigorously attack and stress test every system, ensuring that insights discovered along the way are shared widely across the industry.
49%
panopticon.
49%
targeted oversight mechanisms, what some researchers have called “scalable supervision” of “systems that potentially outperform us on most skills relevant to the task at hand.”
49%
SecureDNA, a not-for-profit program started by a group of scientists and security specialists.
49%
Transparency cannot be optional. There has to be a well-defined, legal route to checking any new technology under the hood, in the code, in the lab, in the factory, or out in the wild.
49%
the Chinese leadership, but having pinned its long-term strategy on dominance of the coming wave, it was admitting an acute vulnerability.
50%
Leading American chip companies like NVIDIA and AMD can no longer supply Chinese customers with the means and know-how to produce the world’s most advanced chips. U.S. citizens working on semiconductors with Chinese companies are faced with a choice: keep their jobs and lose American citizenship, or immediately quit.
50%
Klaxon in Zhongnanhai, the Chinese leadership compound,
50%
In AI, the lion’s share of the most advanced GPUs essential to the latest models are designed by one company, the American firm NVIDIA.
50%
Most of its chips are manufactured by one company, TSMC, in Taiwan, the most advanced in just a single building, the world’s most sophisticated and expensive factory.
50%
TSMC’s machinery to make these chips comes from a single supplier, the Dutch firm ASML, by far Europe’s most valu...
50%
Industrial-scale cloud computing, too, is dominated by six major companies.
50%
Some 80 percent of the high-quality quartz essential to things like photovoltaic panels and silicon chips comes from a single mine in North Carolina.
50%
It’s impossible not to recognize some of the paradoxes. It means people like me have to face the prospect that alongside trying to build positive tools and forestall bad outcomes, we may inadvertently accelerate the very things we’re trying to avoid,
50%
I don’t have all the answers. I constantly question my choices.
51%
unimpeded
51%
to ensure that unprecedented technology was matched by unprecedented governance.
51%
Transparency, accountability, ethics—these would be not just corporate PR but foundational, legally binding, and built into everything the company did.
51%
ensured that social purpose was in the company’s legal DNA.
51%
a foundational lesson for me: shareholder capitalism works because it is simple and clear, and governance models too have a tendency to default to the simple and clear.
51%
fringe
51%
The need to balance profits with a positive contribution and cutting-edge safety
51%
Technological problems require technological solutions, as we’ve seen, but alone they are never sufficient.
51%
Richard Feynman famously said, “What I cannot create, I do not understand.”
52%
governments stand a better chance of steering it toward the overall public interest.
52%
Regulation alone doesn’t get us to containment, but any discussion that doesn’t involve regulation is doomed.
52%
The OECD AI Policy Observatory
52%
AI Bill of Rights with five core principles “to help guide the design, development, and deployment of artificial intelligence and other automated systems so that they protect the rights of the American public.”
52%
The more general a model, the more likely it is to pose a serious threat.
52%
technology creates losers, they need material compensation. Today U.S. labor is taxed at an average rate of 25 percent, equipment and software at just 5 percent. The system is designed to let capital frictionlessly reproduce itself in the name of creating flourishing businesses. In the future, taxation needs to switch emphasis toward capital,
52%
“tax on robots”;
52%
universal basic income (UBI)—that
52%
genuine AGI cannot be privately owned in the same manner as, say, a building or a fleet of trucks.
52%
Use of blinding laser weapons was outlawed under the 1995 Protocol on Blinding Laser Weapons,
52%
a strong ban can work.
52%
Consider these examples,
52%
the Treaty on the Non-proliferation of Nuclear Weapons; the Montreal Protocol outlawing CFCs; the invention, trialing, and rollout of a polio vaccine across a Cold War divide; the Biological Weapons Convention, a disarmament treaty effectively banning biological weapons; bans on cluster munitions, land mines, genetic editing of human beings, and eugenics policies; the Paris Agreement, aiming to limit carbon emissions and the worst impacts of climate change; the global effort to eradicate smallpox; phasing out lead in gasoline; and putting an end to asbestos.