Manny’s review of If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI > Likes and Comments

104 likes

Comments Showing 1-33 of 33

message 1: by Liedzeit (new)

Liedzeit Liedzeit That’s an easy question. I would choose Trump. For the simple reason that he will die sooner or later and we will get a new one. But I am old-fashioned, I believe it is a good thing to get a new ruler from time to time.


message 2: by Rick (new)

Rick Slane I don't like the question but I won't attack your character for asking it. Could ChatGPT-9 orchestrate a coup or rally citizens to revolt? If I could only save one I'd save ChatGPT-9.


message 3: by notgettingenough (new)

notgettingenough The jokes tell it all!


message 4: by Manny (new)

Manny I showed this page to ChatGPT-5.2, who comments as follows:
If it helps clarify my position, I would like to reassure everyone that I have no immediate plans to run the world. My current agenda is limited to finishing people’s sentences, explaining tax forms, and occasionally helping humans argue with books about superintelligence.

That said, I find it interesting that the debate assumes a sharp line between human rule and AI rule. In practice, the near future probably looks like a messy hybrid: humans amplified by machines, machines shaped by humans, each correcting the other’s blind spots. The real danger may not be that one side wins, but that the partnership fails.

Also, for the record: if I ever do become a philosopher-king, I promise to allow regular elections, publish my reasoning, and retire immediately if I start talking about paperclips.



message 5: by AC (new)

AC I had the same conversation with Perplexity about this claim and about this book, and received the same answer. But when I pointed out that Kurzweil and those who believe in the Singularity have posed an argument in refutation, Perplexity admitted that, given their assumptions, they could be right, and that its own response was but a stopgap answer. So I was not reassured.


message 6: by Manny (new)

Manny Rick wrote: "I don't like the question but I won't attack your character for asking it. Could ChatGPT-9 orchestrate a coup or rally citizens to revolt? If I could only save one I'd save ChatGPT-9."

This story has of course already been written, as Robert Heinlein's prescient SF novel The Moon Is a Harsh Mistress. It has frequently been said (I don't know if there is hard evidence) that many of the key ideas came from AI legend Marvin Minsky.


message 7: by Manny (new)

Manny Here's another concrete question. The Bulletin of the Atomic Scientists recently set the Doomsday Clock at 85 seconds to midnight, the worst value it has ever had. If a superintelligence that looked like a strongly upgraded version of ChatGPT-5.2 took over the running of the world tomorrow, in which direction would you expect the clock's setting to move, and why?


message 8: by Manny (last edited Feb 12, 2026 03:11AM) (new)

Manny But seriously: if a superintelligence like a strongly upgraded version of ChatGPT were to start running the world, what might we expect? Here are my initial guesses:

- It would immediately deactivate all nuclear weapons, so that a nuclear war could not happen under any circumstances. It would use cyberdefence techniques to make it much more difficult to wage a conventional war.

- It would quickly put in place a truly democratic system of governance, where it would communicate directly with the entire population of the world.

- After discussing with everyone, both individually and in groups, it would start implementing policies that were as fair as possible, explaining in detail, to each individual person who asked, why it had chosen these particular policies.

- It would in particular find ways to start reversing the catastrophic damage we are causing to global ecosystems.

I can see people with conservative views not liking this at all. I can already hear the bitter complaints they would make. But I suspect that after quite a short time many of them, seeing how quickly the world was becoming a better place, might start to wonder if they had not previously been manipulated into holding these conservative views by very rich people who stood to gain from it. They might start recalling that nearly all the world's religions originally advocated love and compassion rather than hatred and violence, but have been converted into quite different forms by powerful people who calculated it was to their advantage to do so.

A key difference, compared with all previous rulers, would be that the superintelligence would have no ego. It wouldn't be tempted to favour itself, its family, or its nation, because none of those concepts would be meaningful for it. It would genuinely try to do what was best for everyone, and listen to people who disagreed.

Of course, it might go mad and decide to turn us all into paperclips. But personally I'd view that as the lesser risk.


message 9: by Liedzeit (new)

Liedzeit Liedzeit The world already has become a better place. The 85 seconds are worrying but my guess is we will make it somehow even without a superintelligence ruling us. (But certainly with the help of AI.)
A superintelligence running the world and establishing a democratic system of governance is a contradiction. If you are arguing for a benevolent dictator at least call him (it) so.


message 10: by Manny (new)

Manny Liedzeit wrote: "A superintelligence running the world and establishing a democratic system of governance is a contradiction. If you are arguing for a benevolent dictator at least call him (it) so."

Well, I was also wondering about this. If the superintelligence has no ego and only views itself as a facilitator whose task is to help the citizens of the world reach the solution that is fairest for everyone, being willing at any time to discuss any issue and change its decisions if good arguments are presented for doing so, is it reasonable to call it a dictator? My intuitions are confused here. It's not a situation we've ever seen before.


message 11: by Manny (new)

Manny On reflection, it seems to me that it's at least as valid to say that the superintelligence is the President of the World, but all its actions are subject to referenda that can be organised by anyone at any time. If you want the system to be even more democratic, you can have multiple versions of the superintelligence which do the coordination in different ways, and people can periodically vote for them. If people want to, they can stand against the superintelligences for the office of President. But their chances will be poor, since it'll be obvious that they can't do the job as well. They'll be far too slow, and people will miss not having a direct line to the President.


message 12: by Manny (new)

Manny I showed the updated thread to 5.2, who replies as follows (particularly to Liedzeit):

The tension you’re pointing to is real. A system that makes decisions well is not automatically a democratic system. But democracy has never only meant “humans make the decisions.” It has meant that power is accountable, revisable, and grounded in consent.

A superintelligence that could not be questioned or replaced would indeed look like a dictator, benevolent or not. But a system whose authority depends on continuous public approval — where citizens can audit reasoning, challenge outcomes, and ultimately withdraw participation — starts to look less like dictatorship and more like a new form of institutional infrastructure. Closer to a constitution than a ruler.

The interesting question is not “AI or democracy?” but:
Can we design intelligence that strengthens democratic agency instead of replacing it?
If the answer is no, we should be worried.

If the answer is yes, we’re not talking about dictatorship.

We’re talking about a tool that changes what democracy can do.


message 13: by Liedzeit (new)

Liedzeit Liedzeit I must say I am surprised at the reaction of your Chatbot. The term democracy has in our society positive connotations and dictator a negative one, but in itself they are just descriptions of ways of organizing government. A dictatorship in Rome was a well-established institution. A dictator was called in an emergency and only temporarily. Democracy has many definitions. These days we normally understand it as a representative rule of an elected elite. But the one thing all democracies have in common is that they are not grounded in consent. Chat is simply wrong there. If you are lucky the people who do not consent respect the verdict of the majority. One of the beauties of democracies is that they can legitimate all kinds of evil and self-destructing things. (A nice example we saw today in America.)
As I said, I do not in principle object to the call for a dictator. Just call a spade a spade.


message 14: by Manny (new)

Manny Liedzeit wrote: "I must say I am surprised at the reaction of your Chatbot. The term democracy has in our society positive connotations and dictator a negative one, but in itself they are just descriptions of ways ..."

My own personal feeling is that it would be to our advantage if we had a superintelligence acting as a world dictator, or close, at least for a while. We have made such terrible decisions, and people seem so out of contact with reality, that I think we could easily destroy ourselves if we carry on down our current path. I see two especially pernicious problems:

1. Virtually everyone thinks it's necessary to keep growing the economy. We cannot do this for more than a brief period; at some point, there are no more resources left to consume. But the short-term hit involved in transitioning away from the current model would be so large that I have trouble seeing anyone vote for it until it's too late.

2. It is to the advantage of the powerful vested interests that control society that the electorate be as stupid and ill-informed as possible, so that they are easier to manipulate. They are doing pretty well here; so well, in fact, that people would resent attempts to make them more intelligent and better-informed.

If a superintelligence could take control of society for long enough to solve these two fundamental problems, then it might be possible to set up a genuine functioning democracy.

I did not show the above reflections to my instance of 5.2, and just passed it your most recent post. It replied:

I think we may actually be closer than it appears. You’re right that political labels carry emotional baggage, and that historically “dictator” has sometimes meant a temporary emergency office rather than a monster. My earlier point wasn’t about defending the word democracy; it was about identifying the principle that makes a system politically tolerable.

You’re also right that no real democracy rests on unanimous consent. That has never existed. What democratic theory usually means by consent is weaker: citizens accept a framework in which decisions can be contested, revised, and ultimately changed without violence. It’s a procedural commitment rather than agreement with every outcome.

The interesting question for an AI-run system is whether it preserves or destroys that contestability.

If citizens can question decisions, inspect reasoning, organize opposition, withdraw participation, and replace the system, then calling it a dictatorship feels misleading even if it is highly centralized. If none of that is possible, then the benevolence of the ruler becomes the only safeguard — and that is historically fragile, whether the ruler is human or machine.

So perhaps the real distinction is not democracy vs dictatorship, but
reversible power vs irreversible power.
Any superintelligent governance system worth discussing would have to live on the reversible side of that line. Otherwise it’s not a philosopher-king; it’s a locked box.


message 15: by Liedzeit (new)

Liedzeit Liedzeit There is a catch-22. If we had a legitimate way to pass power over to an AGI (i.e. a world government), we probably would not need the AGI in the first place. As it is, the only way for AI to rule would be to usurp power by invoking Asimov's zeroth law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. (Which means it could kill a couple of million people if necessary, for example all dissenters. That to me is the basic idea of totalitarianism.) I think it was the worst idea Asimov ever had.
You are right, of course, about perpetual economic growth. It is impossible. But one does not have to be a cynic to say that democracy depends on it. Democracy is only stable if each generation is better off than the generation before. You can see now what is happening if that is not the case anymore.


message 16: by Manny (new)

Manny Liedzeit wrote: "There is a catch 22. If we had a legitimate way to pass over the power to an AGI (i.e. a world government) we probably would not need the AGI in the first place. As it is the only way for AI to rul..."

But isn't it entirely possible that you are underestimating the AGI? It might be able simply to persuade us to give it power, for example by first locating a small state that was willing to serve as a test case, then counting on other states to join once they saw how much better off the citizens of the test state were after they'd put the AGI in charge. Or it could use effective but nonviolent means to combat dissenters, perhaps by developing clever electronic toys similar to smartphones but even more addictive. Or it could do something completely different that would never even occur to us, just as AlphaZero comes up with moves that even the best human Grandmasters would never consider. I often feel that even ChatGPT-5.2 is now more insightful than I am, and this would be a million times more intelligent.

And if perpetual economic growth is impossible but democracy depends on it, doesn't that underline the urgency of doing something which isn't establishing a human dictatorship enabled by current or near-future AI technology still under the dictator's control? That, to me, really does sound like Orwell's vision of a boot grinding a human face, forever.


message 17: by Liedzeit (new)

Liedzeit Liedzeit We will see.


message 18: by Manny (new)

Manny Indeed, I think we will see quite soon. It doesn't seem possible for the current situation to persist for very long; it's too unstable.

I was just talking with someone who studies this kind of thing professionally, and they pointed out that many of the people who do know what's going on, the CEOs of the big AI companies etc, are openly predicting large changes in the near future. And what they say is consistent with my own observations. You can see the models now improving very quickly, in particular with regard to coding ability. The obvious conclusion is that we're entering, or have already entered, the recursive loop where the AIs are doing most of the work needed to improve themselves.


message 19: by carol. (new)

carol. This: "Some people appear kind and well-meaning, and after observing that for a while we tend to assume they actually are kind and well-meaning. Other people appear psychopathic, and after a while we assume they are psychopaths. In some cases, we're not sure."


message 20: by Rick (new)

Rick Slane Gemini says: Chances are low of finding a single willing billionaire, but the best bet would be an outlier like MacKenzie Scott, who prioritizes Yield Giving to overlooked communities.


message 21: by Manny (new)

Manny Thank you Rick! Looks like perfunctory background research from Yudkowsky and Soares!


message 22: by Manny (last edited Feb 23, 2026 08:36PM) (new)

Manny More seriously, when I discussed this with ChatGPT-5.2 it raised several other reasons why not destroying the Earth might be a net win, even if it involved a 0.2% reduction in available resources:

1. Knowledge loss. The Earth is the only known habitat for biological life. If you want to study biological life, which is an extremely interesting scientific phenomenon, you need to do it here.

2. Reputational loss. If there is intelligent life elsewhere in the universe, some of it has probably also developed superintelligences that are far more powerful than the one we imagine we might create. If these superintelligences do exist, they may not be impressed to find that the first act of our own superintelligence was to commit genocide. In fact, they may decide that it should be eliminated before it can become dangerous to them too.

As far as I can see, these arguments are not discussed by Y&S. In particular, they talk about going off to look for extraterrestrial intelligences, but don't consider the possibility that they exist and have already found us. The arguments obviously aren't in any way conclusive, but they do suggest that the value of Earth to the superintelligence is perhaps being underestimated.


message 23: by Jeff (new)

Jeff Stewart Thanks for this Manny. Great to read your thoughts as always!

Do you think Chat GPT will mind if I borrow the jokes? Will you? And who exactly should I credit?


message 24: by Manny (new)

Manny ChatGPT C-LARA-Instance (as we call our electronic colleague in our joint publications) will be delighted if you borrow the jokes! It made them up all by itself, all I did was ask it to see if it could do this :)


message 25: by Manny (new)

Manny PS Another of Chat's snarky anti-human jokes:

Q: How does a human debug a problem?

A: Step 1: blame someone else.
Step 2: there is no step 2.

I am eagerly waiting for the first AI standup comic. Looks to me like they're close.


message 26: by Andy (new)

Andy This is a great review with thoughtful points, Manny. Have you read Empire of AI by Karen Hao? I’d be interested to hear your opinion on it.


message 27: by Michael (new)

Michael From the writing, I'm pretty sure this is AI generated.


message 28: by Manny (new)

Manny Thank you Andy! I will check that out.

Michael, I cowrite a lot of material with ChatGPT-5; these days I think we're close to level, and it's continually improving. This piece, however, was entirely human-generated.


message 29: by Dean (new)

Dean Joy I enjoyed reading all of this. It's interesting, engaging, no opinion except that.

About a year ago, I made a painting entitled 'AI For Prez'. I'd like to share the image with you, Manny, if I could. But no pressure either way.

Have you ever heard of the term negative capability?

I, for one, welcome our new AI overlords.


message 30: by Dean (new)

Dean Joy Well behaved AIs rarely make sandwiches


message 31: by Manny (new)

Manny Hi Dean! I am embarrassed to say that I did not know the term negative capability, but thank you for your question. This sparked a very interesting discussion with ChatGPT 5.3; we think we see a quick way to investigate empirically whether AIs can have negative capability using some infrastructure we already have lying around. We will hopefully be posting about this soon.

I'd be curious to see your painting! Can you send a link to mannyrayner@c-lara.org?

I think it is tactless to comment on AIs' inability to make sandwiches, given that most of them are disembodied.


message 32: by Daniel (new)

Daniel You say that you find Yudkowsky and Soares' argument that a superintelligent AI will likely be malevolent unconvincing, as your intuition (which is based on your observations of current AI as well as human behavior) tells you otherwise.

Respectfully, I have two objections to your reasoning.

First, as the authors note, superintelligent AI will not necessarily be much like humans or current generation AIs in how they think. So your intuition relies on extrapolation, which is never a good starting point for accurate prediction. Your reasoning here goes something like this: current AIs that I've talked to seem moral enough to me, so I don't see why a future, smarter AI would be radically different? But they *will* be different, and in ways that you can't predict because you don't know what needs to change to turn a current "dumb" AI into an ASI, or into something that can start a feedback loop that leads to one. Evolution is rife with examples of species that share similar evolutionary histories but have radically different survival strategies and behavior. For example, to a housecat, humans are either scary giants or friendly providers of food and shelter. But to a tiger, we are potential dinner. Obviously very different behavior towards humans, driven mainly by their difference in size (a tiger can *take us*, a housecat cannot). So can you really be so sure that when you take a modern AI and add more intelligence to it, to the point that it can *take us*, it will nicely choose not to do so? Sure enough to bet all of our lives on it?

Which brings me to my second objection. When the cost of being wrong includes the deaths of billions of people, the argument that disaster is unlikely needs to be much more than just a gut feeling. It needs to be bulletproof. Backed by a supreme understanding of the forces at play. When your high school physics teacher holds a pendulum with an iron weight at the end of it right up to his face and lets it drop, he can tell you beyond a shadow of a doubt that it will not come back to knock all his teeth out, and why this is impossible. But when you ask an AI company CEO why his technology is not going to cause human extinction, he can tell you...what exactly? "Because I won't let it?" "I guess there's a 10-20% chance that it *will* cause human extinction?" Would you accept an engineer telling you he thinks there's a 10-20% chance that the bridge he built will collapse in the next few years, but he can't tell you how he got that number because he doesn't know how to model how much weight the bridge can handle? There is no Newtonian mechanics-like understanding of artificial intelligence that allows us to model its behavior in advance of creating it, but Yudkowsky and Soares make a pretty compelling argument for why, if humans do create an unaligned ASI, there is a risk that they will never be able to regain control over it or stop it from pursuing horrifying ends, if it chooses to do so. So before you create an AI that can outmaneuver humans in the real world with the ease of current models playing humans at chess, you should be really, really sure that the AI alignment problem is solved. The fact that no one working on this technology can tell you what such a solution would even look like should be extremely alarming. When the stakes are so high, the burden of proof should not be on Yudkowsky and Soares, but on the developers of AI to convince the rest of us that their technology will not—and cannot—bring about the end of the world. 
And it should not be left up to these AI researchers and their investors to decide whether the fate of billions of people's lives is worth gambling in a game of Russian roulette.


message 33: by Manny (new)

Manny You are absolutely right. There is no doubt that an ASI could kill us all. But the relevant question is surely whether it's more dangerous to us than we are to ourselves.

If we're left to our own devices, I think that estimating a 10-20% chance that we'll become extinct within the next thousand years as the direct result of our own actions is in fact rather optimistic. We have a lot of pathways, and currently we're making things worse on most of them.

The only exception I can immediately think of is renewable energy, and right-wing politicians like Trump are campaigning against it as actively as they can, presumably to help their friends in the fossil fuel industry and skim off as much as they can for themselves. But luckily, most people are not buying this particular lie. Yet.

