Air Force AI writes battle plans faster than humans can — but some of them are wrong

AFA 2025 — In a recent Air Force experiment, AI algorithms generated attack plans about 400 times faster than human staff, a two-star general told reporters here at the Air & Space Forces Association’s Air, Space & Cyber conference. The catch? Not all the AI-generated plans would actually work.

The challenge in the exercise, called DASH-2, was to come up with detailed “Courses Of Action” (COAs) for how to strike a given set of targets with a given set of aircraft and weapons, explained Maj. Gen. Robert Claude, a member of the joint Air Force/Space Force team for the Advanced Battle Management System (ABMS). Human staff using traditional methods generated three COAs in about 16 minutes, Claude said, while AI tools generated 10 COAs in “roughly eight seconds.”

A little arithmetic bears out that figure: the AI generated 1.25 COAs per second, while the humans generated one COA every 5.3 minutes (about 320 seconds). That works out to a roughly 400-fold difference in speed.
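For readers who want to check the figures, the reported numbers can be run through a few lines of arithmetic (a minimal sketch using only the rates quoted in the article):

```python
# Reported DASH-2 figures: human staff produced 3 COAs in 16 minutes,
# while the AI tools produced 10 COAs in roughly 8 seconds.
human_rate = 3 / (16 * 60)   # COAs per second, about 0.0031
ai_rate = 10 / 8             # COAs per second, 1.25

speedup = ai_rate / human_rate
seconds_per_human_coa = (16 * 60) / 3

print(round(speedup))               # -> 400
print(round(seconds_per_human_coa)) # -> 320 seconds, about 5.3 minutes
```

The 400-fold figure falls out directly from the two rates; no rounding tricks are needed.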

That’s radically faster than in the inaugural experiment in the series, this summer’s DASH-1, where the Air Force claimed AI sped up planning “seven-fold” — without making any more mistakes than humans. But not all AIs are created equal, and the best-laid plans of mice, men and machines oft go awry.

In DASH-2, Claude said, “while it was much more timely and there were more COAs generated [by AI than humans], they weren’t necessarily completely viable COAs.”

While he didn’t go into details, he said the errors were subtle rather than blatant: more along the lines of failing to factor in the right kind of sensor for specific weather conditions than trying to send tanks on air missions or put glue on pizza. (Of course, subtle errors are harder to catch and require more expertise for a human to correct.)

The lesson, Claude said: “What is going to be important going forward is, while we’re getting faster results and we’re getting more results [from AI], there’s still going to have to be a human in the loop for the foreseeable future to make sure that they’re all viable [and] to make the decision.”

That said, Claude was confident future iterations of AI planning aides can get that error rate back down. The name DASH stands for “Decision Advantage Sprint for Human-Machine Teaming,” and as both “dash” and “sprint” imply, the emphasis was on speed, with the participating software development teams having just two weeks to build custom planning tools.

“It’s all, obviously, in how they build the algorithm. You’ve got to make sure that all the right factors are included,” Claude said. “In a two-week sprint, you know, there’s just not time to build all that in with all the checks and balances.”

That’s an acceptable tradeoff for a quick experiment to explore the art of the possible, not for a deployed military system. “If we pursue this route, if we do this for real,” he said, “it’s going to be longer than a two-week coding period.”

The third and final DASH of the year is already underway at the ominously named Shadow Operations Center — Nellis in Las Vegas. “I was actually out for the beginning of DASH-3 last week,” Claude said.

The general was powerfully struck by how much incoming information the Air Force planners in the exercise, known as battle managers, had to cope with.

“They sat me in front of a scope and it was an eye-opening experience for me to see … from a battle manager standpoint, what it is they go through,” he said. “If we successfully get to the point where we’ve got a good human-machine team arrangement, how valuable that could be.”

Published on September 26, 2025 09:05
Douglas A. Macgregor's Blog