Is AI Rewriting the Rules of War?
Is Trump a Hamlet-like genius who hides sharp strategic skills behind a buffoonish facade? Or is he helped by something much smarter than him?
At the time of the invasion of Iraq, in 2003, a legend circulated that George W. Bush had been influenced in his decision by material on “peak oil” from ASPO (the Association for the Study of Peak Oil). Of course, nobody will ever know what went on inside Bush’s mind, but I don’t see this interpretation as impossible.
At that time, the idea of peak oil was commonly discussed, and just as commonly vehemently rejected. But what people say is one thing, and what they do is another. The human mind is a mishmash of half-baked ideas, and as Daniel Dennett said, we are all meme-infested apes. So, the “peak oil” meme may have played an important role in the decision to invade Iraq, and we all know the results: enormous costs, the death of large numbers of people, and no evident return in terms of control over Middle Eastern oil resources.
Even worse was the case of Afghanistan: twenty years of war, at least two trillion dollars spent, and hundreds of thousands of casualties. In a previous post, I argued that the Afghan campaign was the result of the incompetence of US government officials who misunderstood the results of a geological survey of the Caspian oil resources. That misreading spawned another memetic infection: the legend of a “New Saudi Arabia,” comparable to the legend of the land of “Prester John” at the time of the Crusades. These fabled oil resources could be reached by land only by taking control of Afghanistan, and that implied an enormously expensive land operation. Consider also the Russian operation in Ukraine; there, too, military planners don’t seem to have done much better.
Human history is dotted with gigantic strategic blunders that led to military, economic, and social disaster. Aurelien describes strategic decision-makers in no uncertain terms:
“Western politics is essentially a gigantic echo-chamber on the subject. Everyone who briefs you, everyone who attends the meetings you attend, everybody who briefs them, everybody you meet at receptions and in the margins of meetings, has basically the same opinions. Your colleagues in other governments, the Opposition spokesman on your subject, the Parliamentary Committee, the Secretary-General of NATO, the journalists who interview you, the EU Commission, think-tanks and influential retired politicians, will all be saying much the same thing. What we have here is quite close to a collective fantasy, a collective hallucination, or a process by which people collectively hypnotise each other. It’s groupthink on an epic scale.”
Personally, I have never been involved in strategic military planning, but I recognize Aurelien’s words as a perfect description of the way governments work at the levels where I have direct experience. Governmental decisions are mainly driven by incompetence coupled with groupthink. The potential for enormous blunders is clear, and we have seen plenty of them in recent history, at all levels.
Yet, something may have changed with the operation against Iran. It was nothing like the extravagantly expensive adventures of the US in Iraq and Afghanistan. It was a brief and ruthless operation, not accompanied by the usual massive “consensus-building” campaign designed to demonize the enemy. Public opinion, largely unfavorable to the war, was simply ignored. The Iranians limited themselves to a small and ineffective retaliation. To understand how the Iranian government may have reasoned, you can read this article by Chuck Pezeshky (key sentence: “Iran is a western venue, and people like their creature comforts.”). Wars are, in the end, a form of communication; as brutal as you like, but that is what they are. And, in this case, the two sides communicated to each other that neither wanted to go all the way.
War is always madness, but, in this case, it appears that there was some method in it. Which mind was at work behind the scenes? Is Trump a Hamlet-like genius who hides sharp strategic skills behind a buffoonish facade? Nobody can say, but my impression is that he is smart, yes, but a genius, no. So, what caused the change in the behavior of the US military machine?
I think it is possible to propose an answer: Artificial Intelligence.
You know that AIs have made impressive strides since the introduction of ChatGPT in 2022, and they are now embedded in nearly every sector of human decision-making. We know that they are used at the tactical level, for instance, to operate drones. But it is likely, almost certain, that they are also used at the strategic level. Just as in the case of peak oil, people won’t say which memes are floating in their minds, but it is certain that many are influenced by AIs, and some could even be called addicted.
Of course, I can’t know what role AI plays in current strategic decisions, and asking the chatbots themselves wouldn’t work. So, I devised a test: what would have happened if AI had been available when the major blunders in Iraq and Afghanistan were made? I asked Grok 3 and ChatGPT how they would have advised the American government about invading Afghanistan on the basis of the data available at the time. AI bots can do something that we humans can’t even dream of doing: analyze the past without being affected by emotional or political factors.
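A test of this kind is easy for anyone to reproduce. Here is a minimal Python sketch of how the question could be posed programmatically through a chat-completion API; the prompt wording, the helper function, and the commented-out model call are my own illustrative assumptions, not the exact queries I submitted.

```python
# A minimal sketch of the counterfactual test described above.
# The prompt wording and the (commented-out) API call are illustrative
# assumptions, not the exact text submitted to Grok or ChatGPT.

def build_counterfactual_prompt(decision_year: int, question: str) -> str:
    """Frame a question so the model reasons only from pre-decision data."""
    return (
        f"Use only information that was publicly available before {decision_year}. "
        f"Ignore everything you know about later events. "
        f"As a strategic advisor, {question}"
    )

prompt = build_counterfactual_prompt(
    2001,
    "would you recommend that the US invade Afghanistan? Justify your answer with data.",
)
print(prompt)

# Sending the prompt to a chatbot is then a single API call, for example:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o",  # hypothetical choice of model
#     messages=[{"role": "user", "content": prompt}],
# )
```

Note the obvious caveat: the framing only asks the model not to use hindsight; it cannot enforce it, so some leakage of later knowledge is always possible.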
Both Grok and ChatGPT said that they would have recommended against invading Afghanistan, and both understood that the perception, at the time, of the “immense oil resources” of the Caspian region was much exaggerated. They were not affected by groupthink, nor did they need to show off or gain status within the group. Their suggestion was simply based on an analysis of the available data. Not that they were peaceniks; they did suggest targeted strikes on Al Qaeda positions. And if AIs gave similar advice to the US government about Iran, that would explain why the operation was limited to a bombing strike.
Of course, I understand that I am just proposing a hypothesis, and I have no way to know how deeply embedded AI is in the decision process of the US government. And, of course, I know that AI chatbots can be “tweaked” to provide the answers that users want. But, on the whole, I believe that we are facing a positive development that can change many things in the future.
And we ain’t seen nothing yet. The “DOGE” thing was a fleeting moment, but it was a remarkable innovation in government management. We are going to see many things change in the future. For good or for bad? As usual, we march into the future without a map.