The Precipice: ‘A book that seems made for the present moment’ (New Yorker)
Kindle Notes & Highlights
First, it is profoundly underfunded. This global convention [the Biological Weapons Convention] to protect humanity has just four employees, and a smaller budget than an average McDonald’s.
INFORMATION HAZARDS
It is not just pathogens that can escape the lab. The most dangerous escapes thus far are not microbes, but information; not biohazards, but information hazards.64
UNALIGNED ARTIFICIAL INTELLIGENCE
Could developments in AI pose a risk on this largest scale?
Asked when an AI system would be ‘able to accomplish every task better and more cheaply than human workers’, on average they [the AI researchers surveyed] estimated a 50 percent chance of this happening by 2061 and a 10 percent chance of it happening as soon as 2025.85
FIGURE 5.1 Measures of progress and interest in artificial intelligence. The faces show the very rapid recent progress in generating realistic images of ‘imagined’ people. The charts show long-term progress in chess AI surpassing the best human grandmasters (measured in Elo), as well as the recent rise in academic activity in the field—measured by papers posted on arXiv, and attendance at conferences.86
In the words of Demis Hassabis, co-founder of DeepMind:
We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come. The time we have now is valuable, and we need to make use of it.115
DYSTOPIAN SCENARIOS
So far we have focused on two kinds of existential catastrophe: extinction and the unrecoverable collapse of civilisation. But these are not the only possibilities.
This could be a world without humans (extinction) or a world without civilisation (unrecoverable collapse). But it could also take the form of an unrecoverable dystopia—a world with civilisation intact, but locked into a terrible form, with little or no value.116
We can divide the unrecoverable dystopias we might face into three types:
There are possibilities where the people don’t want that world, yet the structure of society makes it almost impossible for them to coordinate to change it. There are possibilities where the people do want that world, yet they are misguided and the world falls far short of what they could have achieved. And in between there are possibilities where only a small group wants that world but enforces it against the wishes of the rest. Each of these types has different hurdles it would need to overcome in order to become truly locked in.
FIGURE 5.2 An extended classification of existential catastrophes by the kind of out...
Note that to count as existential catastrophes, these outcomes don’t need to be impossible to break out of, nor to last millions of years. Instead, the defining feature is that entering that regime was a crucial negative turning point in the history of human...
OTHER RISKS
One of the most transformative technologies that might be developed this century is nanotechnology.
UNFORESEEN RISKS
Nick Bostrom has recently pointed to an important class of unforeseen risk.138 Every year as we invent new technologies, we may have a chance of stumbling across something that offers the destructive power of the atomic bomb or a deadly pandemic, but which turns out to be easy to produce from everyday materials. Discovering even one such technology might be enough to make the continued existence of human civilisation impossible.
PART THREE THE PATH FORWARD
6 THE RISK LANDSCAPE
A new type of thinking is essential if mankind is to survive and move toward higher levels. —Albert Einstein
QUANTIFYING THE RISKS
The numbers represent my overall degrees of belief that each of the catastrophes will befall us this century. This means they aren’t simply an encapsulation of the information and argumentation in the chapters on the risks. Instead, they rely on an accumulation of knowledge and judgement on each risk that goes beyond what can be distilled into a few pages. They are not in any way a final word, but are a concise summary of all I know about the risk landscape.
Existential catastrophe via | Chance within next 100 years
ANATOMY OF AN EXTINCTION RISK
My colleagues at the Future of Humanity Institute have suggested classifying risks of human extinction by the three successive stages that need to occur before we would go extinct:
Origin: How does the catastrophe get started?
Some are initiated by the natural environment, while others are anthropogenic. We can usefully break anthropogenic risks down according to whether the harm was intended, foreseen or unforeseen. And we can further break these down by whether they involve a small number of actors (such as accidents or terrorism) or a large number (such as climate change or nuclear war).
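To make the classification concrete, here is a minimal Python sketch of the taxonomy; the type names and the example assignments are my own illustration, not the book’s notation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Source(Enum):
    NATURAL = auto()
    ANTHROPOGENIC = auto()

class Intent(Enum):
    INTENDED = auto()
    FORESEEN = auto()
    UNFORESEEN = auto()

class Actors(Enum):
    FEW = auto()    # e.g. accidents, terrorism
    MANY = auto()   # e.g. climate change, nuclear war

@dataclass
class Origin:
    source: Source
    intent: Intent | None = None   # only anthropogenic risks have an intent
    actors: Actors | None = None   # and an actor count

# Examples drawn from the passage above:
asteroid = Origin(Source.NATURAL)
terrorism = Origin(Source.ANTHROPOGENIC, Intent.INTENDED, Actors.FEW)
climate_change = Origin(Source.ANTHROPOGENIC, Intent.FORESEEN, Actors.MANY)
```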
Scaling: How does the catastrophe reach a...
It could start at a global scale (such as climate change) or there could be a mechanism that scales it up. For example, the sunlight-blocking particles from asteroids, volcanoes and nuclear war get spread across the world by the Earth’s atmospheric circulation, while pandemics are scal...
Endgame: How does the catastrophe fi...
How does it kill everyone, wherever they are? Like the dust kicked up by an asteroid, the lethal substance could have spread everywhere in the environment; like a pandemic it could be carried by people wherever people go; or in an intentional plan to cause extinctio...
This classification lets us break down the probability of extinction into the product of (1) the probability it gets started, (2) the probability it reaches a global scale given it gets started, and (3) the probability it causes extinction given it reaches a global scale:
p_extinction = p_origin × p_scaling × p_endgame
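A minimal sketch of this decomposition in Python, using purely hypothetical stage probabilities rather than any estimate from the book:

```python
# Hypothetical stage probabilities (placeholders, not the book's figures).
p_origin = 0.01    # the catastrophe gets started this century
p_scaling = 0.3    # it reaches global scale, given that it starts
p_endgame = 0.1    # it causes extinction, given that it goes global

# Each stage is conditional on the previous one, so the probabilities multiply.
p_extinction = p_origin * p_scaling * p_endgame
print(f"p_extinction = {p_extinction:.4f}")  # 0.0003
```

Because the stages multiply, halving any one stage probability halves the overall risk, so an intervention can target the origin, the scaling or the endgame stage.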
COMBINING AND COMPARING RISKS
FIGURE 6.1 There are many ways risks can combine, ranging from perfect anticorrelation (A) to perfect correlation (B). An important case in between is independence (C). The total risk posed depends on how much risk is ‘wasted’ in the overlap—the region where we’d suffer a catastrophe even if we eliminated one of the risks. A large overlap reduces the total risk, but also reduces the benefits from eliminating a single risk.
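The three cases in the caption can be expressed directly. A short Python sketch with two hypothetical risks (the numbers are illustrative only):

```python
p1, p2 = 0.10, 0.03   # two hypothetical risks

# (A) Perfect anticorrelation: no overlap, so the risks simply add.
total_anticorrelated = min(1.0, p1 + p2)        # 0.13

# (B) Perfect correlation: maximal overlap; the smaller risk is 'wasted',
#     so eliminating it would not reduce the total at all.
total_correlated = max(p1, p2)                  # 0.10

# (C) Independence: the overlap is p1 * p2, the chance both strike.
total_independent = 1 - (1 - p1) * (1 - p2)     # 0.127

print(total_anticorrelated, total_correlated, total_independent)
```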
RISK FACTORS
MATHEMATICS OF RISK FACTORS
WHICH RISKS?
The importance of a problem is the value of solving it.
My colleague at the Future of Humanity Institute, Owen Cotton-Barratt, has shown that when these terms are appropriately defined, the cost-effectiveness of working on a particular problem can be expressed by a very simple formula:32
Cost-Effectiveness = Importance × Tractability × Neglectedness
The model also shows us how to make trade-offs between these dimensions. For example, when choosing between two risks, if their probabilities differed by a factor of five, this would be outweighed by a factor of ten in how much funding they currently receive. Indeed, the model suggests a general principle:
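As a rough illustration of that trade-off, here is a sketch which assumes importance is proportional to a risk’s probability and neglectedness to the inverse of its current funding; all numbers are hypothetical:

```python
def cost_effectiveness(probability, funding, tractability=1.0):
    """Importance × Tractability × Neglectedness, up to a constant factor."""
    return probability * tractability / funding

# Risk A is five times more probable than risk B,
# but currently receives ten times the funding.
risk_a = cost_effectiveness(probability=0.05, funding=10.0)  # 0.005
risk_b = cost_effectiveness(probability=0.01, funding=1.0)   # 0.010

# The 10x funding gap outweighs the 5x probability gap:
# marginal work on risk B is twice as cost-effective.
print(risk_b / risk_a)  # 2.0
```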
Proportionality
When a set of risks have equal tractability (or when we have no idea which is more tractable), the ideal global portfolio allocates resources to each risk in pro...
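The statement is truncated here, but on the reading that resources are allocated in proportion to each risk’s probability (my assumption, given the heading), a sketch:

```python
# Hypothetical risk probabilities and a notional budget (illustrative only).
risks = {"risk_x": 0.10, "risk_y": 0.03, "risk_z": 0.02}
budget = 100.0

# Proportional allocation: each risk's share of the budget matches
# its share of the total probability, assuming equal tractability.
total_p = sum(risks.values())
allocation = {name: budget * p / total_p for name, p in risks.items()}
print(allocation)  # risk_x ~66.7, risk_y ~20.0, risk_z ~13.3
```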
EARLY ACTION
7 SAFEGUARDING HUMANITY
There are no catastrophes that loom before us which cannot be avoided; there is nothing that threatens us with imminent destruction in such a fashion that we are helpless to do something about it. If we behave rationally and humanely; if we concentrate coolly on the problems that face all of humanity, rather than emotionally on such nineteenth-century matters as national security and local pride; if we recognize that it is not one’s neighbors who are the enemy, but misery, ignorance, and the cold indifference of natural law—then we can solve all the problems that face ...
GRAND STRATEGY FOR HUMANITY