Jeff Childers: CHIP OF THE WEST
By Jeff Childers, Substack, 5/31/25
The phrase “AI safety” is becoming a strategically flexible term of art. This week, SlashGear ran a suggestive story headlined, “Fury: America’s New Superweapon Is A True Technological Marvel.” That much was absolutely true.

Anduril Industries, the startup behind Fury, is no legacy military contractor like Raytheon or Boeing. The company, which launched like a rocket in 2017, was founded by Palmer Luckey, the teenaged wunderkind who designed the Oculus Rift VR headset. He was born in 1992! In other words, he’s now just 32 years old, and was 24 when he started the company.
Revenge of the Nerds, with a kill switch.
Anduril, not yet eight years old, is already valued at $36 billion, just below industry legend Raytheon’s valuation. The young company enjoys unusually close ties with U.S. military leadership and skips past traditional defense procurement red tape under special programs like the Defense Innovation Unit (DIU) and SOFWERX.
In its brief corporate life, Anduril has delivered advanced miltech solutions and secured generous production contracts, particularly for its flagship product, a key bit of software called Lattice OS. The company describes it as “an AI-powered operating system designed to orchestrate autonomous defense systems across land, sea, air, and even cyberspace.”

The operative word is “autonomous.” Lattice isn’t for manually flying drones with a joystick and a VR headset. It’s for issuing goal-oriented commands, the kind you would give to human soldiers: monitor this valley, engage anything crossing this perimeter, neutralize radar emitters in zone Alpha, or destroy the Eiffel Tower.
There’s no pilot. There’s just Bob, back at the base, making suggestions.
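To make the distinction concrete, here is a minimal sketch, in plain Python, of what goal-oriented tasking looks like in software compared with stick-and-throttle control. To be clear, Lattice’s real interfaces are not public; every name below is invented purely for illustration, and only the shape of the idea matters.

```python
from dataclasses import dataclass

# Hypothetical illustration only -- these names are NOT Anduril's API.
# The point is the shape of the interface: an objective, an area, and
# rules of engagement, instead of a stream of stick-and-throttle inputs.

@dataclass
class Objective:
    goal: str                  # e.g. "monitor" or "engage_on_crossing"
    region: str                # a named area of responsibility, e.g. "valley_7"
    rules_of_engagement: str   # a constraint the autonomy may not override

def issue_objective(obj: Objective) -> None:
    """The operator states intent; the autonomy stack is left to decide
    routes, sensor usage, and timing on its own."""
    print(f"Tasking: {obj.goal} over {obj.region} under ROE '{obj.rules_of_engagement}'")

def manual_input(pitch: float, roll: float, throttle: float) -> None:
    """Manual control, by contrast, is a continuous stream of low-level inputs."""
    print(f"Stick: pitch={pitch:+.2f} roll={roll:+.2f} throttle={throttle:.2f}")

if __name__ == "__main__":
    # One line of intent replaces thousands of joystick updates per sortie.
    issue_objective(Objective("monitor", "valley_7", "weapons_hold"))
    manual_input(0.05, -0.10, 0.80)
```

That gap, one line of intent versus thousands of control inputs, is the entire pitch of autonomy, and it is why Bob gets to stay at the base.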
The article described Fury, Anduril’s latest hardware prototype, a pre-production proof of concept. The pairing is similar to how Tesla both writes its self-driving software and builds the cars that run it. Except that military-grade AI is obviously far beyond chatbots, self-driving Teslas, or whatever else we experience in consumer-level AI tech.
The SlashGear article introduced Fury as a 20-foot-long autonomous fighter jet (not a drone) capable of climbing to 50,000 feet, hitting Mach 0.95, and sustaining +9 Gs. Designed for air-to-air combat, it’s made to fly and fight by itself.

“Fury,” the article explained, “is a high-performance, multi-mission Group 5 autonomous air vehicle (AAV).”
We could pause here to wallow in the well-worn moral murk: the classic hand-wringing over the ethics of autonomous killing machines, and over whether “AI safety” still applies once your autonomous AI jet is pulling 9 Gs and launching air-to-air missiles. But set that ethical quagmire aside.
My question for today is much simpler: how stupid do they think we are? The answer, apparently, is pretty stupid.
It is only minor news to the defense industry, and ignored entirely by corporate media, that military AI can be trusted to navigate a $30 million fighter jet in three-dimensional space under combat conditions. Yet we are also told that nobody can figure out how to get your chatbot to open the browser by itself and renew your driver’s license.
It makes less than no sense.
I’m a lawyer, not an AI engineer with a Q-clearance, so obviously I don’t know. But for Heaven’s sake, I can read. If AI can fly jets at near-supersonic speeds and dogfight other AI fighters, then the technology available to the military is light years beyond suggesting a polite way to decline an invitation to your kindergartner’s classmate’s bar mitzvah.
It makes me wonder: was last week’s breathless “disclosure” of an AI-turned-pharma-whistleblower real? Or was it just a psyop, designed to convince us that consumer AI should be locked down and hobbled for safety? In truth, are they intentionally dribbling AI out slowly, to keep our enemies behind the eight ball and maybe to protect our economy from being disrupted too quickly?
In podcast after podcast and conference after conference, they keep warning us about the coming threat of artificial general intelligence — the moment AI becomes smarter than people — while also insisting, over and over, that we’re still years away from that troubling milestone. But isn’t it odd that they only ever talk about consumer AI — chatbots, homework helpers, and virtual therapists — and never speculate about the AI already flying autonomous military aircraft, managing battlefield logistics, or directing drone swarms at the speed of thought?
For the last year — maybe longer — we haven’t seen meaningful progress in consumer chatbot intelligence. Instead, we’ve been dazzled by a parade of low-stakes novelties: talking image generators, dancing avatars, and viral clips of AI-generated cats telling dad jokes in Morgan Freeman’s voice.
It’s not that AI has stopped evolving (clearly it hasn’t); it’s that we’re being shown the circus, not the control room.
Once you begin wondering what level of AI we have really reached, recent history begins to make a lot more sense. Aside from the proxy war in Ukraine, the next-most terrifying conflict was the escalation over the Taiwan Strait. Starting around 2021, China and the U.S. faced off with naval fleets over the one island where most advanced AI chips are made.
For two years straight, all Nancy Pelosi could talk about was semiconductors. “Chips this, chips that, squaaawk,” as she flew her broomstick into Taipei like it was spring break for congressional war hawks. CNBC, 2022:

Now, in 2025, President Trump has just declared a new Manhattan Project — not for bombs, but to supercharge our national energy grid and fuel the computing demands of massive new AI data centers.
Make no mistake. The real arms race is no longer nuclear. The real arms race is artificial intelligence. I doubt anyone would bother arguing the point.
Once you realize that AI is the new arms race, recent history stops looking confusing — and starts looking obvious. The Ukraine war dominated headlines. But the real geopolitical near-miss was the 2022 standoff over Taiwan — the one triggered by Nancy Pelosi’s surprise visit to the island. Officially, she was there to support democracy. But every journalist with a press badge knew the real story: the day-drinking day-trader was there to protect the global supply of AI chips.

The chipmaker Pelosi risked a war with China to visit was Taiwan Semiconductor Manufacturing Company (TSMC), the quiet fabrication engine behind NVIDIA’s GPUs, Apple’s SoCs, and nearly every serious AI training run on Earth.
Congress was already acting. The 2022 CHIPS Act prioritized onshoring chip manufacturing with $52 billion in federal funds. And since he took office, President Trump has expanded and accelerated the onshoring push: declaring a national security emergency, allowing faster permitting and easier zoning, and using tariffs to force domestic sourcing of defense-related chips.
It’s working. Taiwan’s TSMC is now building a massive $165+ billion fabrication complex in Phoenix, Arizona, scheduled to come online in phases between 2026 and 2028. Axios, this month:

Intel, long dormant, is also staging a major chipmaking comeback with new U.S. fabs in Ohio and Arizona — thanks mostly to Trump’s industrial pressure campaign.
None of this is any particular secret. As far back as 2018, the defense rags were accurately predicting current events. In April 2018, just months after Palmer Luckey founded Anduril, Defense One ran this prophetic story:

The prescient analysis, written by defense strategist Elsa B. Kania, warned that the world was already locked into an AI arms race — not just between the U.S. and China, but including Russia, India, Israel, even non-state actors like ISIS, who were using commercial drones to deliver battlefield intelligence.
Back then, the military’s Project Maven had just launched. The Pentagon’s Joint Artificial Intelligence Center (JAIC) didn’t exist yet. ChatGPT wasn’t even a glimmer in the public’s eye.
Kania called it “more than” an arms race because, unlike nuclear missiles, AI isn’t a discrete, singular weapon system. It’s a general-purpose technology, like electricity or the steam engine, capable of transforming every aspect of military power: cybersecurity, battlefield decision-making, electronic warfare, logistics, surveillance, and strategic planning.
In other words, AI doesn’t just change what militaries do; it changes how they think. And that means the traditional “arms race” metaphor breaks down. Kania argued that framing the AI revolution purely in “weapons race” terms missed the bigger picture: AI will become the nervous system of every future military, not just its weapons lab.
I can’t emphasize this enough: years before ChatGPT was suggesting recipes for the three overripe vegetables left in the fridge, the military was accurately forecasting the coming arms race (or, in Kania’s phrase, something “more than” one). Which means that, in 2018, they must already have had enough operational AI capability to know where we were headed.
Perhaps a better question is: why did they let us have ChatGPT at all? Whatever the reason, OpenAI did not create the AI revolution. It was a relatively late player.
Tech bro Palmer Luckey didn’t name his billion-dollar startup Anduril by accident. The name comes from J.R.R. Tolkien’s The Lord of the Rings, and it carries a heavy thematic payload.
In Lord of the Rings canon, Andúril means “Flame of the West.” It is the sword reforged from the shards of Narsil, the blade of Elendil, and wielded by Aragorn, the rightful king of Gondor. Narsil, though broken, was the sword Isildur used to slice the One Ring from Sauron’s hand.

In short, Andúril was a weapon, a weapon of ancient power, reforged in modern hands to reclaim rightful dominion.
“Anduril” wasn’t just branding. It was mission signaling: mythic ambition, the restoration of lost power, a righteous moral frame. It’s a civilizational project. Luckey sees himself as rebuilding America’s lost military edge, like Aragorn returning to reclaim his throne. And it suggests his team sees itself as the good guys, wielding dangerous power to combat evil.
What does it all mean? It means that we regular folks aren’t witnessing the rise of AI. We’re witnessing its containment.
For the past year, the public discussion has been fixated on the wrong question. Talking heads fret over whether ChatGPT might say something offensive, or whether Midjourney might draw the wrong number of fingers. We are told that AI isn’t quite ready: it’s potentially dangerous, often unpredictable, prone to hallucination, and a bit too quirky for real work. They claim we’re years away from so-called artificial general intelligence, and that “alignment” must come first.
Meanwhile, military-grade AI is flying 9G fighter jets.
This is not a conspiracy theory. None of it is secret. The defense journals were writing about the AI revolution back in 2018, and even earlier, well before consumer AI hit the scene. Along with his venture capital partners, Palmer Luckey began pouring what would become billions into an AI operating system, starting in 2017!
Back then, defense analyst Elsa Kania wasn’t warning that we were about to enter an AI arms race; she said we were already in one. She accurately labeled it “more than” an arms race, one that would reshape every dimension of military, economic, and political power. And that is just what is happening.
Anduril Industries, founded in 2017 by a 24-year-old Palmer Luckey, wasn’t predicting the future; it was building out the present. The firm’s software platform, Lattice OS, isn’t a helpful chatbot. It’s a battlefield operating system for managing fully autonomous weapons across land, sea, air, and space. The new aircraft, Fury, is a fully autonomous fighter jet, not merely a prototype but a fully functional, AI-based weapons system.
Don’t misunderstand: I am not complaining about consumer AI’s throttling, not really. It seems logical on many levels. For one thing, the economy needs time to absorb what’s coming. And I also get that we don’t need China stealing weapons-grade AI from Microsoft Word’s Copilot.
But it is aggravating that the AI conversation itself has been nerfed and dumbed down, with the enthusiastic participation of a useless corporate media that consistently obscures the real issues and instead runs ridiculously superficial articles mocking small AI mistakes in MAHA reports. The AI that flies 9G fighter jets doesn’t make those kinds of easy errors. Only the versions we get do.
And if we can’t honestly debate AI, how can we participate in deciding who wields Andúril?