Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
Read between September 17 and September 28, 2025
36%
Parque de las Ciencias, which operates as a zona franca, a free-trade zone. Some cheekily call it zona America, with a hint of bitterness, for housing mostly American companies that do not pay taxes to the government.
37%
Sitting next to Gary Marcus, Altman won over even one of his most vocal critics. “Let me just add for the record that I’m sitting next to Sam, closer than I’ve ever sat to him except once before in my life,” Marcus said, “and his sincerity…is very apparent physically in a way that just doesn’t communicate on a television screen.” (Marcus would later walk back his rare show of approval: “I realized that I, the Senate, and ultimately the American people, had probably been played.”)
37%
The same narrative Altman had long used within OpenAI to justify hiding its research and moving as fast as possible was now being expertly wielded to steer the US AI regulatory discussion toward proposals that would avoid holding OpenAI accountable, and in some cases entrench its monopoly.
38%
Washington was only the climax of the US leg of Altman’s policy charm offensive. In March of that year, after he tweeted that he planned to travel abroad to meet with users, the trip evolved into a multicity, multicontinent odyssey to sit for photo ops with seemingly every president in the G20. It now had new branding: Sam Altman’s World Tour.
Note from Nicolette: Interestingly, this was exactly what Zuck did in Careless People
39%
At the end of 2023, The New York Times would sue OpenAI and Microsoft for copyright infringement for training on millions of its articles. OpenAI’s response in early January, written by the legal team, delivered an unusually feisty hit back, accusing the Times of “intentionally manipulating our models” to generate evidence for its argument. That same week, OpenAI’s policy team delivered a submission to the UK House of Lords communications and digital select committee, saying that it would be “impossible” for OpenAI to train its cutting-edge models without copyrighted materials. After the media …
39%
Like Sutskever, some researchers began to reference “a bunker” in casual conversations, even imagining a setup similar to Los Alamos: Somewhere out in a remote patch of American desert, an elite team of AI researchers would live and work in secure facilities to protect them from outside threats.
40%
Meanwhile, there were other concerning examples of Altman’s behavior. In March 2023, he had emailed the board without D’Angelo and announced that he believed it was time for D’Angelo to step down. The fact that Quora was designing its own chatbot, Poe, Altman argued, posed a conflict of interest. The assertion felt sudden and dubiously motivated. Toner, McCauley, and D’Angelo had each at times asked Altman inconvenient questions, whether about OpenAI’s safety practices, the strength of the nonprofit, or other topics. With his allies on the board dwindling, Altman seemed to the three to be …
45%
On November 9, as the independent directors closed in on a final decision, Sutskever had another call with McCauley. “Sam said, ‘Tasha continues to be very supportive of having Helen step off the board,’ ” he told her. It was a balder-faced lie than Altman had told the first time; McCauley had not had any more exchanges with Altman.
45%
In one of the strangest reporting experiences of my career, the person who responded was just as confused as I was about what was happening. He was also a former employee at OpenAI, but not one who had been involved in writing the letter. He had simply received a copy of the letter in his personal email with zero explanation; when he tried to follow up with questions, he received another mysterious response: a link to the Tor inbox with the phrase capped_profit, which seemed to be a username, followed by what looked like a password.
46%
Then in the Drafts folder, there was an email that hadn’t yet been sent and wasn’t meant to be. It was a message intended for just those logged in to the inbox.
46%
After The Blip, Sutskever never returned to the office. With diminished representation of their concerns on the executive team and the board, OpenAI’s Safety clan was now significantly weakened. By April 2024, with the conclusion of the investigation, many, especially those with the highest p(doom)s, were growing disillusioned and departing. Two of them were also fired, OpenAI said, for leaking information.
46%
As the Safety clan’s numbers depleted, the rest of the company was back to advancing its vision for Her. It now had all the ingredients: global brand recognition, real data on user behaviors from ChatGPT and its other products, and its newly trained model, Scallion.
46%
Scallion would be the first launch under a new so-called Preparedness Framework, which OpenAI had released at the end of the previous year. The framework detailed a new evaluation process that the company would use to test for dangerous capabilities, naming the same categories that Altman and the policy white paper had popularized in Washington: cybersecurity threats, CBRN (chemical, biological, radiological, and nuclear) weapons, persuasion, and the evasion of human control.
47%
After The Blip, the board’s phrasing “not consistently candid in his communications” had, as some in OpenAI expected, triggered several investigations from regulators and law enforcement, including one from the US Securities and Exchange Commission into whether company investors had been misled, according to The Wall Street Journal. In the same month, The New York Times filed its copyright infringement lawsuit, which added to a snowballing pile of other lawsuits from artists, writers, and coders over OpenAI’s reaping hundreds of millions, then billions, of dollars from models trained without …
48%
Within hours of Leike’s tweets on May 17, another tweet was going viral. Kelsey Piper, a senior writer at Vox for the EA-inspired section Future Perfect, had posted a new story. “When you leave OpenAI, you get an unpleasant surprise,” she wrote in her tweet sharing the scoop, “a departure deal where if you don’t sign a lifelong nondisparagement commitment, you lose all of your vested equity.”
49%
The Omnicrisis could have been a moment for OpenAI to engage in self-reflection. It was a prompt for the company to understand why exactly it had simultaneously lost the trust of employees, investors, and regulators, as well as that of the broader public. Only then, maybe, just maybe, would it have begun to realize that both The Blip and the Omnicrisis were one and the same: the convulsions that arise from the deep systemic instability that occurs when an empire concentrates so much power, through so much dispossession, leaving the majority grappling with a loss of agency and material wealth …
49%
Aleksander Mądry, the Polish MIT professor whom many described as a power seeker, had, in his relatively short tenure, successfully amassed a sizable fiefdom within the company. Mądry didn’t think bringing back Sutskever was a good idea. Sutskever commanded too much admiration and loyalty among researchers. It could take away from Mądry’s influence, as well as the influence of his good friend Pachocki. Within a few hours, Mądry’s concerns had sown doubt and fractured the leadership.
49%
Six years after my initial skepticism about OpenAI’s altruism, I’ve come to firmly believe that OpenAI’s mission—to ensure AGI benefits all of humanity—may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure.
49%
“The most successful founders do not set out to create companies,” Altman reflected on his blog in 2013. “They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.”
49%
Innovation, modernity, progress—what wouldn’t we pay to achieve them?
50%
the Māori principle of kaitiakitanga, or guardianship
50%
“Data is the last frontier of colonization,” Mahelona told me: The empires of old seized land from Indigenous communities and then forced them to buy it back, with new restrictive terms and services, if they wanted to regain ownership. “AI is just a land grab all over again. Big Tech likes to collect your data more or less for free—to build whatever they want to, whatever their endgame is—and then turn it around and sell it back to you as a service.”
50%
What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed, will ever emerge from—a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project.
51%
As Joseph Weizenbaum, MIT professor and inventor of the ELIZA chatbot, said in the 1960s, “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” I hope this book is just one offering to help induce understanding.