Kindle Notes & Highlights
some jurisdictions are far more permissive about human experimentation than others, where pockets of advanced bio-capabilities and self-modification produce divergent outcomes at the level of DNA, which in turn produce divergent outcomes at the levels of states and microstates. There could then be something like a biohacking personal enhancement arms race.
The old social contract gets ripped to pieces. Institutions are bypassed, undermined, superseded. Taxation, law enforcement, compliance with norms: all under threat. In this scenario rapid fragmentation of power could accelerate a kind of “turbo-balkanization” that gives nimble and newly capable actors unprecedented freedom to operate.
Something more like the pre-nation-state world emerges in this scenario, neo-medieval, smaller, more local, and constitutionally diverse, a complex, unstable patchwork of polities. Only this time with hugely powerful technology.
Hyper-libertarian technologists like the PayPal founder and venture capitalist Peter Thiel celebrate a vision of the state withering away, seeing this as liberation for an overmighty species of business leaders or “sovereign individuals,” as they call themselves.
This is a world where billionaires and latter-day prophets can build and run microstates; where non-state actors from corporations to communes to algorithms begin to overshadow the state from above but also from below.
Understanding the future means handling multiple conflicting trajectories at once. The coming wave launches immense centralizing and decentralizing riptides at th...
Every individual, every business, every church, every nonprofit, every nation, will eventually have its own AI and ultimately its own bio and robotics capabi...
Each new formulation of power will offer a different vision of delivering public goods, or propose a different way to make products or a different set of religious beliefs to evangelize.
What happens if the state can no longer control, in a balanced fashion, the coming wave?
World War I killed around 1 percent of the global population; World War II, 3 percent.
Over time, then, the implications of these technologies will push humanity to navigate a path between the poles of catastrophe and dystopia.
This is the essential dilemma...
Steadily, many nations will convince themselves that the only way of truly ensuring this is to install the kind of blanket surveillance we saw in the last chapter: total control, backed by hard power. The door to dystopia is cracked open. Indeed, in the face of catastrophe, for some dystopia may feel like a relief.
A cataclysm would galvanize calls for an extreme surveillance apparatus to stop future such events. If or when something goes wrong with technology, how long before the crackdown starts? How
Trading off liberty and security is an ancient dilemma. It was there in the foundational account of the Leviathan state from Thomas Hobbes. It has never gone away. To be sure, this is often a complex and multidimensional relationship, but the coming wave raises the stakes to a new pitch. What level of societal control is appropriate to stopping an engineered pandemic? What level of interference in other
Lewis Mumford talked about the “megamachine,” where social systems combine with technologies to form “a uniform, all-enveloping structure” that is “controlled for the benefit of depersonalized collective organizations.”
Even though the drivers behind it seem so great and immovable, should humanity get off the train? Should we reject continual technological development altogether? Might it be time, however improbable, to have a moratorium on technology itself?
Civilizations that collapse are not the exception; they are the rule. A survey of sixty civilizations suggests they last about four hundred years on average before falling apart. Without new technologies, they hit hard limits to development—in available energy, in food, in social complexity—that bring them crashing down.
Modern civilization writes checks only continual technological development can cash. Our entire edifice is premised on the idea of long-term economic growth.
And long-term economic growth is ultimately premised on the introduction and diffusion of new technologies.
it’s the expectation of consuming more for less or getting ever more public service without paying more tax, or the idea that we can unsustainably degrade the environment while life keeps getting better indefinitely, the ...
Demand for lithium, cobalt, and graphite is set to rise 500 percent by 2030. Currently
Given the population and resource constraints, just standing still would probably require a global two- to threefold productivity improvement, and standing still is not acceptable for the world’s vast majority, among whom, for example, child mortality is twelve times higher than in developed countries. Of
In 1955, toward the end of his life, the mathematician John von Neumann wrote an essay called “Can We Survive Technology?”
“in a rapidly maturing crisis—a crisis attributable to the fact that the environment in which technological progress must occur has become both undersized and underorganized.”
“For progress there is no cure,” he writes. “Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration.”
My profound worry is that technology is demonstrating the real possibility of moving sharply net negative, that we don’t have answers to arrest this shift, and that we’re locked in with no way out.
I am, however, confident that the coming decades will see complex, painful trade-offs between prosperity, surveillance, and the threat of catastrophe growing ever more acute. Even a system of states in the best possible health would struggle.
Our great-grandparents would be astonished at the abundance of our world. But they would also be astonished at its fragility and perils.
Technology is the best and worst of us. There isn’t a neat one-sided approach that does it justice. The only coherent approach to technology is to see both sides at the same time.
Exponential change is coming. It is inevitable. That fact needs to be addressed.
Convening a White House roundtable and delivering earnest speeches are easy; enacting effective legislation is a different proposition. As we’ve seen, governments face multiple crises independent of the coming wave—declining trust, entrenched inequality, polarized politics, to name a few. They’re overstretched, their workforces under-skilled and unprepared for the kinds of complex and fast-moving challenges that lie ahead.
Talking about the ethics of machine learning systems is a world away from, say, the technical safety of synthetic bio. These discussions happen in isolated, echoey silos. They rarely break out.
Right now, scattered insights are all we’ve got: hundreds of distinct programs across distant parts of the technosphere, chipping away at well-meaning but ad hoc efforts without an overarching plan or direction.
we need a clear and simple goal, a banner imperative integrating all the different efforts around technology into a coherent package.
The central problem for humanity in the twenty-first century is how we can nurture sufficient legitimate political power and wisdom, adequate technical mastery, and robust norms to constrain technologies to ensure they continue to do far more good than harm. How, in other words, we can contain the seemingly uncontainable.
Going into 2020, the Global Health Security Index ranked the United States number one in the world and the U.K. not far behind in terms of pandemic readiness. Yet a catalog of disastrous decisions delivered mortality rates and financial costs materially worse than in peer countries like Canada and Germany. Despite what looked like excellent expertise, institutional depth, planning, and resources, even those best prepared on paper were sideswiped.
Governments fight the last war, the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate. This, meanwhile, is an age of surprises.
the EU’s AI Act, first proposed in 2021. As of this writing in 2023, the act is going through the lengthy process of becoming European law. If it is enacted, AI research and deployment will be categorized on a risk-based scale. Technologies with “unacceptable risk” of causing direct harm will be prohibited. Where AI affects fundamental human rights or critical systems like basic infrastructure, public transport, health, or welfare, it will get classed as “high risk,”
High-risk AI must be “transparent, secure, subject to human control and properly documented.”