I’d urge you to accept at least the mildest version of doomerism: the simple, one-sentence statement on AI risk—“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”—signed in 2023 by the CEOs of the three most highly regarded AI companies (Altman’s OpenAI, Anthropic, and Google DeepMind), along with many of the world’s foremost experts on AI.