
I, Warbot: The Dawn of Artificially Intelligent Conflict

Artificial Intelligence is going to war. Intelligent weapon systems are here today, and many more are on the way tomorrow. Already, they're reshaping conflict--from the chaos of battle, with pilotless drones, robot tanks and unmanned submersibles, to the headquarters far from the action, where generals and politicians use technology to weigh up what to do. AI changes how we fight, and even how likely it is that we will.

In battle, warbots will be faster, more agile and more deadly than today's crewed weapons. New tactics and concepts will emerge, with spoofing and swarming to fool and overwhelm enemies. Strategies are changing too. When will an intelligent machine escalate, and how can it be deterred? Can robots predict the future? And what happens to the 'art of war' as machines themselves become creative?

Autonomous warfare makes many people uneasy. An international campaign against 'killer robots' hopes to ban AI from conflict. But the genie is out--AI weapons are too useful for states to outlaw. Still, crafting sensible rules for warbots is possible. This fascinating book shows how it might be done.

280 pages, Hardcover

Published September 1, 2021

19 people are currently reading
602 people want to read

About the author

Kenneth Payne

21 books · 5 followers

Ratings & Reviews



Community Reviews

5 stars
13 (16%)
4 stars
34 (43%)
3 stars
23 (29%)
2 stars
8 (10%)
1 star
0 (0%)
Displaying 1 - 8 of 8 reviews
Asim Bakhshi
Author · 8 books · 340 followers
January 12, 2022
In nine different stories published in the 1940s, the inimitable science fiction author Isaac Asimov explored the ethical implications of technology by imagining a world increasingly inhabited by humanoid autonomous systems. The stories chart different threads of a single narrative in which a reporter interviews a ‘robopsychologist’, and all of them converge on the problem of ethical programming, with one or more of Asimov’s laws of robotics at the center. In one story, set around 2019, a robot refuses to follow a human order but still does the ‘right’ thing. In another, spun in 2021, a robot is left with a programming error and finds itself in an infinite loop of withholding versus yielding information.

Kenneth Payne’s recent book, which explores the complex interplay of AI and military strategic thinking, takes its title from a clever wordplay on Asimov’s anthology I, Robot. Payne, a political psychologist, has long been interested in the evolution of strategic thinking within the context of warfare. His book I, Warbot echoes Arthur C. Clarke’s classic maxim that any sufficiently advanced technology is indistinguishable from magic. AI is no exception, and warfare theorists, as well as AI practitioners, must try to mark out the brittle skills where AI might end up being worse than a toddler.

But while the problem has an interesting theoretical dimension, it is not just theoretical. Autonomous weapons systems are a reality, and algorithmically driven disruptive technologies are extending the boundaries of control in subtle ways. In this context, while we have always stated unambiguously what we expect of autonomous weapons, isn’t it time to reflect on the ways they might behave unexpectedly?

This takes us back to Asimov’s fictional world, where a robot must be designed to follow three laws. One, it may not injure a human being or, through inaction, allow a human being to come to harm. Two, it must obey human orders unless they conflict with the first law. Three, it must protect its own existence unless doing so conflicts with the first or second law.

However, while these laws might suit a fictional web of stories, would they provide a rational framework for guiding actual war machines, built upon layer after layer of arguably inexplicable autonomous computing? Violence, after all, is a distinguishing feature of war, and if future warbots – lethal robotic machines – are designed and programmed to kill accurately and relentlessly, how can they incorporate an essential constraint of inefficiency without creating irresolvable paradoxes?

To attempt an answer, Payne offers three laws of warbots as an opening gambit. First, a warbot should kill only those its owner wants it to, and should exercise violence in a humane way. Second, it must understand its owner’s intentions and exercise creativity in serving them. Third, it should protect the humans on its owner’s side at all costs, up to and including sacrificing itself, though never at the expense of the mission.

This gambit is no less than a semantic masterstroke. Among other things, it immediately exposes how the AI portrayed in film and art is human-like without being human; the media, too, cannot break free of science-fictional templates. These indulgences tell us more about ourselves than about robots: they are unrealistic expectations of AI, mere on-screen manipulations that fall well short of what is actually possible in autonomous computing. Launching from this critical opening gambit, the rest of the book aims to chart that domain of possibilities.

Since Payne is primarily a political psychologist, a recurring thread in the book is that the minds of warbots – their neural connectivity, so to speak – will be quite different from those of humans. As AI practitioners, we may point to how state-of-the-art reinforcement learning algorithms are diverging from classical neural networks. Military tacticians, on the other hand, may refer to the psychological insights of strategic theorists. Carl von Clausewitz, for instance, argued that war is an intensely emotional business in which ‘passionate hatred’ motivates the belligerents, and that the commander is an idealized ‘genius’ who makes the right decisions with limited information. Conceding with humility, theorists like Clausewitz felt no qualms in admitting that they were in the dark about the complexities of the human mind. Nevertheless, they could state one fact emphatically: the human brain doesn’t work like a machine.

Thus any decision-making technology, once transformed into artificially intelligent warfare, will yield unexpected results. Historical blueprints for creating warbots are nonexistent; it’s all about working backward from what we want them to achieve. The question boils down to this: what kind of weapons do the armed forces require? More specifically, what kind of drivers shape those requirements in the first place?

Reducing the first question to a functional context disregards the most important paradigm, which is cultural: societal attitudes to war, and how different strategic cultures rationalize violence as a means to an end. The second question relates to design, i.e. the engineering philosophy as well as the craft. Could we say that warbots are clever machines? They are, of course, far ahead of humans in computing power, in optimized decision-making within an extremely constrained environment, and in agility of convergence, but would they be considered as ‘clever’ and ‘intuitively informed’ as humans? Isn’t it possible that autonomous problem solving is being mistaken here for intelligence?

Payne argues at length that cyber security is becoming increasingly entangled with AI. To mitigate risks, organizations like DARPA regularly launch grand challenges for AI to automatically find vulnerabilities in code. While these challenges stop there for ethical reasons, what stops attempts at the next obvious tactical maneuver, turning defense into attack by hacking the hacker? These competitions provide insights into new conundrums around attribution in cyber warfare: if we cannot possibly know who has attacked us, how can we launch a counter-offensive without inviting chaos?

The situation becomes more complex still as efforts like DeepMind’s increasingly imitate Asimov’s fictional universe, the terrain where warbots design other warbots. New deep learning algorithms dive into particular environmental constraints and look for features that can serve as foundations for other reinforcement learning algorithms. This is the meta-learning frontier, where an autonomous agent tries to learn what other autonomous learners need to learn.

It is no surprise that the goal of DARPA’s AI Next program is to build autonomous computers that can ‘reason and think in context’ and function more as ‘colleagues than as tools’. The subtle distinction between exploratory creativity and transformational or collaborative creativity is hard to miss. Whether it is AlphaZero beating Stockfish, AlphaGo beating Lee Sedol, or an AI engine winning multi-player no-limit poker, all are examples of learning by exploratory creativity at its best. Transformational creativity, however, is true genius. Machines, in this sense, are excellent at ‘thinking’, but can they truly ‘create’? Only if they can ‘understand’.

Payne’s book not only raises a key strategic concern; it is also timely, and likely to engage military professionals and practicing scientists alike. As engineers, we are well familiar with the ways control systems fail and overshoot the bounds of stability. We are also aware of the strategic analogs of unstable systems, for instance Clausewitz’s ‘fog of war’ and ‘friction’ leading to failure. Can we pursue research in directions where both concerns are combined, to achieve semi-autonomous, artificially intelligent agents collaborating with humans and bound by our specific moral constraints? Only time will tell, since the boundaries between fact and fiction are already blurred. The real challenge lies in preserving the art of war while augmenting the science of it.
Holger
131 reviews · 20 followers
July 19, 2022
This book has broad ambitions, but ends up in the disappointing in-between that is the autonomous-weapons landscape.

The first third is a good introduction to tactics vs. strategy and why you shouldn't expect to hand all your fighting over to a robot soon, if ever. What is fighting, anyway: targeting, deciding, escalating? Plenty of interesting military history is added along the way.

The second third summarizes the state of AI: Tesla, OpenAI, Google beating Go. You are most likely familiar with all of this; there must be ghostwriters selling these snippets. By now, a lack of structure becomes apparent.

The final part should tie everything back together but doesn't fully. The book's flagship "new 3 laws of robotics" appear on the last six pages.

The book is pleasant enough to read, but doesn't contain much news - at least, you'll learn what defense analysts think about AI systems, which is:
1. The field is hot.
2. There is much classified work going on.
3. Things won't happen as fast, or be as revolutionary, as many people think.
8 reviews
August 21, 2021
A superb exploration of the impact of autonomous systems and AI on warfare - now and into the future. Kenneth Payne is at the leading edge of thinking on this topic, including a revised set of Asimov’s three rules for autonomous systems. A highly recommended book.
Zach Martin
61 reviews
October 11, 2024
Had to read this quickly for one of my classes this week, and it ended up being pretty interesting for someone like me who’s not really into military stuff. I think Payne offers a compelling exploration of how exactly artificial intelligence is transforming modern warfare as we know it. He does a good job of raising important questions about the role of human oversight in warfare, the potential risks of AI escalation in military operations, and the sociopolitical challenges that come with regulating such powerful technologies. I think in the coming years the very nature of how we view conflicts will look a lot different, and a part of me believes we won’t be ready for it when it happens.
86 reviews
September 21, 2025
Torturously written and filled with utterly banal observations derived mostly from a slew of sci-fi movies, Payne’s work is a monument to AI hype with almost no understanding of its subject matter. The author goes through digression after digression; one can conclude, having finished the book, that this is often because he has almost nothing interesting to say. Far more about warfare than about AI, the book starts with an unclear thesis and devolves from there. Payne masterfully adds further evidence to my literary thesis that journalists should exercise greater empathy with their readers and refrain from writing books.
Sudhagar
329 reviews · 2 followers
May 10, 2022
Disappointing, after all the praise from reputable media organizations. The author doesn't break new ground or provide an in-depth view of the emerging technologies. The book offers neither a good strategic view nor an in-depth tactical picture of military AI, and it meanders and jumps from one topic to another.
Sara
118 reviews · 2 followers
September 9, 2023
Absolutely fascinating read - well written for the non-tech crowd to understand AI and its implications for armed conflict.
12 reviews
December 22, 2024
Though verbose, Payne's book efficiently covers almost every aspect of the concerns over artificially intelligent warfare that the reader would be craving to understand further.
