Kindle Notes & Highlights
The Precipice by Toby Ord
Read between January 30 and March 20, 2022
First, it is profoundly underfunded. This global convention to protect humanity has just four employees, and a smaller budget than an average McDonald’s.
INFORMATION HAZARDS
It is not just pathogens that can escape the lab. The most dangerous escapes thus far are not microbes, but information; not biohazards, but information hazards.64
UNALIGNED ARTIFICIAL INTELLIGENCE
Could developments in AI pose a risk on this largest scale?
Asked when an AI system would be ‘able to accomplish every task better and more cheaply than human workers’, the AI researchers surveyed estimated on average a 50 percent chance of this happening by 2061 and a 10 percent chance of it happening as soon as 2025.85
FIGURE 5.1 Measures of progress and interest in artificial intelligence. The faces show the very rapid recent progress in generating realistic images of ‘imagined’ people. The charts show long-term progress in chess AI surpassing the best human grandmasters (measured in Elo), as well as the recent rise in academic activity in the field, measured by papers posted on arXiv and attendance at conferences.86
In the words of Demis Hassabis, co-founder of DeepMind:
We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come. The time we have now is valuable, and we need to make use of it.115
DYSTOPIAN SCENARIOS
So far we have focused on two kinds of existential catastrophe: extinction and the unrecoverable collapse of civilisation. But these are not the only possibilities.
This could be a world without humans (extinction) or a world without civilisation (unrecoverable collapse). But it could also take the form of an unrecoverable dystopia—a world with civilisation intact, but locked into a terrible form, with little or no value.116
We can divide the unrecoverable dystopias we might face into three types:
There are possibilities where the people don’t want that world, yet the structure of society makes it almost impossible for them to coordinate to change it. There are possibilities where the people do want that world, yet they are misguided and the world falls far short of what they could have achieved. And in between there are possibilities where only a small group wants that world but enforces it against the wishes of the rest. Each of these types has different hurdles it would need to overcome in order to become truly locked in.
FIGURE 5.2 An extended classification of existential catastrophes by the kind of out...
Note that to count as existential catastrophes, these outcomes don’t need to be impossible to break out of, nor to last millions of years. Instead, the defining feature is that entering that regime was a crucial negative turning point in the history of human...
OTHER RISKS
One of the most transformative technologies that might be developed this century is nanotechnology.
UNFORESEEN RISKS
Nick Bostrom has recently pointed to an important class of unforeseen risk.138 Every year as we invent new technologies, we may have a chance of stumbling across something that offers the destructive power of the atomic bomb or a deadly pandemic, but which turns out to be easy to produce from everyday materials. Discovering even one such technology might be enough to make the continued existence of human civilisation impossible.
PART THREE
THE PATH FORWARD
6 THE RISK LANDSCAPE
A new type of thinking is essential if mankind is to survive and move toward higher levels. —Albert Einstein
QUANTIFYING THE RISKS
The numbers represent my overall degrees of belief that each of the catastrophes will befall us this century. This means they aren’t simply an encapsulation of the information and argumentation in the chapters on the risks. Instead, they rely on an accumulation of knowledge and judgement on each risk that goes beyond what can be distilled into a few pages. They are not in any way a final word, but are a concise summary of all I know about the risk landscape.
Existential catastrophe via | Chance within next 100 years
ANATOMY OF AN EXTINCTION RISK
My colleagues at the Future of Humanity Institute have suggested classifying risks of human extinction by the three successive stages that need to occur before we would go extinct:
Origin: How does the catastrophe get started?
Some are initiated by the natural environment, while others are anthropogenic. We can usefully break anthropogenic risks down according to whether the harm was intended, foreseen or unforeseen. And we can further break these down by whether they involve a small number of actors (such as accidents or terrorism) or a large number (such as climate change or nuclear war).
Scaling: How does the catastrophe reach a...
It could start at a global scale (such as climate change) or there could be a mechanism that scales it up. For example, the sunlight-blocking particles from asteroids, volcanoes and nuclear war get spread across the world by the Earth’s atmospheric circulation, while pandemics are scal...
Endgame: How does the catastrophe fi...
How does it kill everyone, wherever they are? Like the dust kicked up by an asteroid, the lethal substance could have spread everywhere in the environment; like a pandemic it could be carried by people wherever people go; or in an intentional plan to cause extinctio...
This classification lets us break down the probability of extinction into the product of (1) the probability it gets started, (2) the probability it reaches a global scale given it gets started, and (3) the probability it causes extinction given it reaches a global scale:
P(extinction) = P(origin) × P(scaling) × P(endgame)
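To make the decomposition concrete, here is a minimal sketch in Python. The three stage probabilities are illustrative placeholders, not estimates from the book.

```python
# Decomposing extinction risk into the three successive stages described above.
# All three probabilities are illustrative placeholders, not the book's figures.

p_origin = 0.01   # the catastrophe gets started this century
p_scaling = 0.5   # it reaches a global scale, given that it starts
p_endgame = 0.1   # it causes extinction, given that it reaches a global scale

p_extinction = p_origin * p_scaling * p_endgame
print(f"P(extinction) = {p_extinction:.4f}")  # P(extinction) = 0.0005
```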
COMBINING AND COMPARING RISKS
FIGURE 6.1 There are many ways risks can combine, ranging from perfect anticorrelation (A) to perfect correlation (B). An important case in between is independence (C). The total risk posed depends on how much risk is ‘wasted’ in the overlap—the region where we’d suffer a catastrophe even if we eliminated one of the risks. A large overlap reduces the total risk, but also reduces the benefits from eliminating a single risk.
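To make the caption’s point concrete, the sketch below compares the total risk in the three cases, using two hypothetical risks of 10 percent each; the numbers are invented for illustration.

```python
# How two risks combine, following the three cases in Figure 6.1.
# p1 and p2 are hypothetical risk levels, invented for illustration.

p1, p2 = 0.10, 0.10

# (A) Perfect anticorrelation: the risks never overlap, so they simply add
# (valid while p1 + p2 <= 1).
total_anticorrelated = p1 + p2                # 0.20

# (C) Independence: subtract the overlap p1 * p2, which would otherwise
# be counted twice.
total_independent = 1 - (1 - p1) * (1 - p2)   # 0.19

# (B) Perfect correlation: one catastrophe always accompanies the other,
# so the total is just the larger risk, and all of the smaller risk is
# 'wasted' in the overlap -- eliminating it alone would gain nothing.
total_correlated = max(p1, p2)                # 0.10

for label, total in [("anticorrelated (A)", total_anticorrelated),
                     ("independent (C)", total_independent),
                     ("correlated (B)", total_correlated)]:
    print(f"{label}: total risk = {total:.2f}")
```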
RISK FACTORS
MATHEMATICS OF RISK FACTORS
WHICH RISKS?
The importance of a problem is the value of solving it.
My colleague at the Future of Humanity Institute, Owen Cotton-Barratt, has shown that when these terms are appropriately defined, the cost-effectiveness of working on a particular problem can be expressed by a very simple formula:32
Cost-Effectiveness = Importance × Tractability × Neglectedness
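To see how the formula behaves, here is a minimal sketch in Python. Modelling neglectedness as the reciprocal of current funding, along with all of the numbers, is an illustrative assumption rather than anything specified in the book.

```python
# Cost-Effectiveness = Importance x Tractability x Neglectedness, with
# neglectedness modelled as 1 / (current funding) -- an assumption made
# purely for illustration. All figures are invented.

def cost_effectiveness(importance: float, tractability: float, funding: float) -> float:
    """Relative value of additional work on a problem."""
    return importance * tractability / funding

# Risk A is five times as probable (so five times as important) as risk B,
# but already receives ten times the funding. With equal tractability,
# the factor of ten in funding outweighs the factor of five in probability:
print(cost_effectiveness(importance=5.0, tractability=1.0, funding=10.0))  # 0.5
print(cost_effectiveness(importance=1.0, tractability=1.0, funding=1.0))   # 1.0
```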
The model also shows us how to make trade-offs between these dimensions. For example, when choosing between two risks, one being five times as probable would be outweighed by its receiving ten times as much current funding. Indeed, the model suggests a general principle:
Proportionality
When a set of risks have equal tractability (or when we have no idea which is more tractable), the ideal global portfolio allocates resources to each risk in pro...
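Although the statement of the principle is truncated above, its gist (when tractability is equal or unknown, allocate resources to each risk in proportion to its probability) can be sketched as follows; the risk names and probabilities are hypothetical placeholders.

```python
# Proportional allocation of a global portfolio across risks, assuming
# equal (or unknown) tractability. The probabilities are hypothetical
# placeholders, not the book's estimates.

risks = {"risk_x": 0.10, "risk_y": 0.03, "risk_z": 0.01}
budget = 100.0  # total resources, in arbitrary units

total_probability = sum(risks.values())
allocation = {name: budget * p / total_probability for name, p in risks.items()}

for name, amount in allocation.items():
    print(f"{name}: {amount:.1f} units")  # 71.4, 21.4 and 7.1 units respectively
```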
EARLY ACTION
7 SAFEGUARDING HUMANITY
There are no catastrophes that loom before us which cannot be avoided; there is nothing that threatens us with imminent destruction in such a fashion that we are helpless to do something about it. If we behave rationally and humanely; if we concentrate coolly on the problems that face all of humanity, rather than emotionally on such nineteenth-century matters as national security and local pride; if we recognize that it is not one’s neighbors who are the enemy, but misery, ignorance, and the cold indifference of natural law—then we can solve all the problems that face...
GRAND STRATEGY FOR HUMANITY