Kindle Notes & Highlights
by Toby Ord
Read between July 13 and July 15, 2020
This book argues that safeguarding humanity’s future is the defining challenge of our time.
Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover. As the gap between our power and our wisdom grows, our future is subject to an ever-increasing level of risk.
Ours is a world of flawed decision-makers, working with strikingly incomplete information, directing technologies which threaten the entire future of the species.
We need to take decisive steps to end this period of escalating risk and safeguard our future.
Until the Industrial Revolution, any prosperity was confined to a tiny elite, with extreme poverty the norm.
The very fact that these risks stem from human action shows us that human action can address them.
During the twentieth century, my best guess is that we faced around a one in a hundred risk of human extinction or the unrecoverable collapse of civilization. Given everything I know, I put the existential risk this century at around one in six: Russian roulette.
If I’m even roughly right about their scale, then we cannot survive many centuries with risk like this. It is an unsustainable level of risk.
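To make the "unsustainable" point concrete, here is a minimal sketch of the arithmetic, assuming (purely for illustration) that the one-in-six risk stays constant and independent from century to century:

```python
# Chance humanity survives n centuries if each century independently
# carries a 1-in-6 chance of existential catastrophe (illustrative assumption;
# the text itself argues the risk will change over time).
risk_per_century = 1 / 6

for n in (1, 5, 10, 20):
    survival = (1 - risk_per_century) ** n
    print(f"{n:>2} centuries: {survival:.1%} chance of survival")
# -> 1: 83.3%, 5: 40.2%, 10: 16.2%, 20: 2.6%
```

Under this toy model, the odds of lasting even ten centuries fall below one in six, which is the sense in which the risk level cannot be sustained.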
existential risk is greatly neglected: by government, by academia, by civil society.
civilization has already been independently established at least seven times by isolated peoples.
because, in expectation, almost all of humanity’s life lies in the future, almost everything of value lies in the future as well.
a longtermist ethic is nevertheless especially well suited to grappling with existential risk.
if we drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in a multitude of ways. We would fail to achieve the dreams they hoped for; we would betray the trust they placed in us, their heirs; and we would fail in any duty we had to pay forward the work they did for us.
The international body responsible for the continued prohibition of bioweapons (the Biological Weapons Convention) has an annual budget of just $1.4 million—less than the average McDonald’s restaurant.
we can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us.
protection from existential risk is a global public good—one where the pool of beneficiaries spans the globe. This means that even nation states will neglect it.
Protection from existential risk is an intergenerational global public good. So even the entire population of the globe acting in concert could be expected to undervalue existential risks by a very large factor, leaving them greatly neglected.
Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it.
In my view, no other existential risk is as well handled as that of asteroids and comets.
By my estimate, we face about a thousand times more anthropogenic risk over the next century than natural risk, so it is the anthropogenic risks that will be our main focus.
Robock and colleagues have also modeled a limited nuclear exchange between India and Pakistan, with arsenals a fraction of the size of the US and Russia, and find a significant nuclear winter effect.
human activity has already released more than an entire biosphere’s worth of carbon into the atmosphere.
when Leo Szilard and Enrico Fermi first talked about the possibility of an atomic bomb: “Fermi thought that the conservative thing was to play down the possibility that this may happen, and I thought the conservative thing was to assume that it would happen and take all the necessary precautions.”
people tend to develop technologies as soon as the opportunity presents itself and deal with the consequences later.
Imagine if the scientific establishment of 1930 had been asked to compile a list of the existential risks humanity would face over the following hundred years. They would have missed most of the risks covered in this book.
Overall, I think the chance of an existential catastrophe striking humanity in the next hundred years is about one in six. This is not a small statistical probability that we must diligently bear in mind, like the chance of dying in a car crash, but something that could readily occur, like the roll of a die, or Russian roulette.
I’ve made allowances for the fact that we will likely respond to the escalating risks, with substantial efforts to reduce them.
We can call the difference between Pr(X | f = f_sq) and Pr(X | f = f_min) the contribution that F makes to existential risk.
we could call the difference between Pr(X | f = f_sq) and Pr(X | f = f_max) the potential of F.
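Restated compactly (a sketch of the two definitions above, where X is existential catastrophe, f is the level of some factor F, f_sq its status-quo level, and f_min and f_max its risk-minimizing and risk-maximizing levels; both differences are written here in the order that makes them non-negative):

```latex
% Contribution and potential of a risk factor F, per the definitions above.
% X: existential catastrophe; f: level of F; f_sq: status quo level;
% f_min / f_max: risk-minimizing / risk-maximizing levels of F.
\begin{align}
  \text{Contribution}(F) &= \Pr(X \mid f = f_{\text{sq}}) - \Pr(X \mid f = f_{\text{min}})\\
  \text{Potential}(F)    &= \Pr(X \mid f = f_{\text{max}}) - \Pr(X \mid f = f_{\text{sq}})
\end{align}
```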
early action is higher leverage, but more easily wasted. It has more power, but less accuracy.
We currently spend less than a thousandth of a percent of gross world product on [reducing existential risks]. Earlier, I suggested bringing this up by at least a factor of 100, to reach a point where the world is spending more on securing its potential than on ice cream, and perhaps a good longer-term target may be a full 1 percent.
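The arithmetic behind these comparisons, sketched with illustrative figures (the gross world product and ice cream numbers below are assumptions for illustration, not from the text):

```python
# Back-of-envelope check of the spending figures above.
GWP = 100e12          # assumed gross world product, ~$100 trillion
ICE_CREAM = 60e9      # assumed global ice cream spending, ~$60 billion/year

current = 0.001 / 100 * GWP   # "a thousandth of a percent" of GWP
scaled_up = 100 * current     # the suggested 100x increase
long_term = 0.01 * GWP        # the 1 percent longer-term target

print(f"current spending:  ${current:,.0f}")     # ~$1 billion
print(f"100x increase:     ${scaled_up:,.0f}")   # ~$100 billion
print(f"exceeds ice cream: {scaled_up > ICE_CREAM}")
print(f"1 percent target:  ${long_term:,.0f}")   # ~$1 trillion
```

On these assumptions, the factor-of-100 increase is roughly what it takes for spending on existential risk to overtake spending on ice cream.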
If we behave rationally and humanely; if we concentrate coolly on the problems that face all of humanity, rather than emotionally on such nineteenth century matters as national security and local pride; if we recognize that it is not one’s neighbors who are the enemy, but misery, ignorance, and the cold indifference of natural law—then we can solve all the problems that face us. We can deliberately choose to have no catastrophes at all. —Isaac Asimov
the fact that we are at such an early stage in thinking about the longterm future of humanity also provides us with reason to be hopeful as we begin our journey.
there appear to be no major obstacles to humanity lasting an extremely long time, if only that were a key global priority.
our uncertainty about the underlying physical probability is not grounds for ignoring the risk, since the true risk could be higher as well as lower.
Multilateral action can resolve this tragedy of the commons, replacing a reliance on countries’ altruism with a reliance on their prudence: still not perfect, but a much better bet.
there is a need for international institutions focused on existential risk to coordinate our actions. But it is very unclear at this stage what forms they should take.
in 1948 Einstein wrote: “I advocate world government because I am convinced that there is no other possible way of eliminating the most terrible danger in which man has ever found himself. The objective of avoiding total destruction must have priority over any other objective.”
Perhaps this could be done through establishing a kind of constitution for humanity, and writing into it the paramount need to safeguard our future, along with the funding and enforcement mechanisms required.
in 1997, UNESCO passed a Declaration on the Responsibilities of the Present Generations Towards Future Generations.
humanity is akin to an adolescent, with rapidly developing physical abilities, lagging wisdom and self-control, little thought for its longterm future and an unhealthy appetite for risk.
While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones.
Don’t be tribal. Safeguarding our future is not left or right, not eastern or western, not owned by the rich or the poor. It is not partisan. Framing it as a political issue on one side of a contentious divide would be a disaster.
If you work in computer science or programming, you might be able to shift your career toward helping address the existential risk arising from AI: perhaps through much-needed technical research on AI alignment,
Many are looking for skilled people who really grasp the unusual mission. If you have any of these skills—for example, if you have experience working on strategy, management, policy, media, operations or executive assistance—you could join one of the organizations currently working on existential risk.
When you donate money to a cause, you effectively transform your own labor into additional work for that cause.
accelerating expansion also puts a limit on what we can ever affect. If, today, you shine a ray of light out into space, it could reach any galaxy that is currently less than 16 billion light years away. But galaxies further than this are being pulled away so quickly that neither light, nor anything else we might send, could ever affect them. And next year this affectable universe will shrink by a single light year. Three more galaxies will slip forever beyond our influence.
there are 20 billion galaxies that our descendants might be able to reach. Seven-eighths of these are more than halfway to the edge of the affectable universe—so distant that once we reached them no signal could ever be sent back. Spreading out into these distant galaxies would thus be a final diaspora,
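A rough sanity check of the "three galaxies a year" figure, assuming (for illustration only) that the roughly 20 billion reachable galaxies are spread uniformly through an affectable volume of radius about 16 billion light years:

```python
# Order-of-magnitude check on "three galaxies lost per year".
N_GALAXIES = 20e9          # galaxies our descendants might reach (from the text)
RADIUS_LY = 16e9           # radius of the affectable universe, light years
SHRINK_LY_PER_YEAR = 1     # the horizon shrinks by one light year per year

# For a uniform distribution, the fraction in a thin outer shell of
# thickness dr is dV/V = 3*dr/r, so the shell holds 3*N*dr/r galaxies.
lost_per_year = 3 * N_GALAXIES * SHRINK_LY_PER_YEAR / RADIUS_LY
print(f"galaxies lost per year: {lost_per_year:.1f}")  # ~3.75
```

Uniform density is a simplification; the point is only that the figure quoted above is the right order of magnitude.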
We might compare our situation with that of people 10,000 years ago, on the cusp of agriculture. Imagine them sowing their first seeds and reflecting upon what opportunities a life of farming might enable, and on what the ideal world might look like. Just as they would be unable to fathom almost any aspect of our current global civilization, so too we may not yet be able to see the shape of an ideal realization of our potential.