Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
Read between September 17 and September 28, 2025
1%
Over 150 of the interviews were with more than 90 current or former OpenAI executives and employees, and a handful of contractors who had access to detailed documentation of parts of OpenAI’s model development practices.
1%
On Friday, November 17, 2023, around noon Pacific time, Sam Altman, CEO of OpenAI, Silicon Valley’s golden boy, avatar of the generative AI revolution, logged on to a Google Meet to see four of his five board members staring at him. From his video square, board member Ilya Sutskever, OpenAI’s chief scientist, was brief: Altman was being fired. The announcement would go out momentarily. Altman was in his room at a luxury hotel in Las Vegas to attend the city’s first Formula One race in a generation,
1%
Murati read off a few more questions. How did this affect the relationship with Microsoft? Microsoft, OpenAI’s biggest backer and exclusive licensee of its technologies, was the sole supplier of its computing infrastructure. Without it, all the startup’s work—performing research, training AI models, launching products—would grind to a halt.
1%
As rumors continued to proliferate, word arrived that three more senior researchers had quit the company: Jakub Pachocki and Szymon Sidor, early employees who had among the longest tenures at OpenAI, and Aleksander Mądry, an MIT professor on leave who had joined recently.
1%
Also that night, the board and the remaining leadership at the company were holding a series of increasingly hostile meetings. After the all-hands, the false projection of unity between Sutskever and the other leaders had collapsed. Many of the executives who had sat next to Sutskever during the livestream had been nearly as blindsided as the rest of the staff, having learned of Altman’s dismissal moments before it was announced.
2%
OpenAI was not like a normal company, its board not like a normal board. It had a unique structure that Altman had designed himself, giving the board broad authority to act in the best interest not of OpenAI’s shareholders but of its mission: to ensure that AGI, or artificial general intelligence, benefits humanity. Altman had long touted the board’s ability to fire him as its most important governance mechanism.
2%
The drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence?
2%
OpenAI had presented itself as a bold experiment in answering this question. It was founded by a group including Elon Musk and Sam Altman, with other billionaire backers like Peter Thiel,
2%
Musk and Altman, who had until then both taken more hands-off approaches as cochairmen, each tried to install himself as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. In hindsight, the rift was the first major sign that OpenAI was not in fact an altruistic project but rather one of ego.
2%
I arrived at OpenAI’s offices for the first time shortly thereafter, in August 2019. After three days embedded among employees and dozens of interviews, I could see that the experiment in idealistic governance was unraveling.
3%
Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources.
3%
the benefits of generative AI mostly accrue upward.
3%
The current AI paradigm is also choking off alternative paths to AI development. The number of independent researchers not affiliated with or receiving funding from the tech industry has rapidly dwindled, diminishing the diversity of ideas in the field not tied to short-term commercial benefit.
3%
much of what our society actually needs—better health care and education, clean air and clean water, a faster transition away from fossil fuels—can be assisted and advanced with, and sometimes even necessitates, significantly smaller AI models and a diversity of other approaches. AI alone won’t be enough, either: We’ll also need more social cohesion and global cooperation, some of the very things being challenged by the existing vision of AI development.
3%
Demis Hassabis, the professorial CEO of the London-based AI lab DeepMind Technologies.
3%
In late 2013, when Musk learned that Google would acquire DeepMind, he was convinced that such a union would end very badly. Publicly, he warned that if Google gave a hypothetical AGI an objective to maximize profits, the software could seek to take out the company’s competitors at any cost. “Murdering all competing A.I. researchers as its first move strikes me as a bit of a character flaw,” Musk told The New Yorker.
4%
For years afterward, Musk would regularly characterize Hassabis as a supervillain who needed to be stopped.
4%
Superintelligence: Paths, Dangers, Strategies, in which Oxford philosopher Nick Bostrom argues that if AI ever became smarter than humans, it would be difficult to control and could cause an existential catastrophe.
4%
Musk would come to feel like Altman had used him to catapult to prominence. It was an echo of an observation that has followed Altman throughout his life. “You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king,” his mentor, Paul Graham, once famously said. Graham reinforced the point again years later: “Sam is extremely good at becoming powerful.”
4%
Both his media savvy and dealmaking, two pillars of his rise, rest on his remarkable ability to tell a good story. In this Altman is a natural. Even knowing as you watch him that his company would ultimately fail, you can’t help but be compelled by what he’s saying.
5%
Every YC company that succeeded gained the incubator, and Graham, increasing prestige. By the time Altman sold Loopt, the incubator had already seeded several startups that had grown or would soon grow into billion-dollar companies, including Dropbox and Airbnb. YC became the most elite club in the Valley.
5%
He connected people to one another over email with a single word (“meet”) or a single punctuation mark (“?”)—a famous habit of Amazon’s Jeff Bezos—to get a conversation started.
6%
with politicians, taking another page out of Thiel’s book. But where Thiel asserted his wealth to back Republican candidates, pumping tens of millions into their campaigns, Altman grew increasingly involved in politics in the opposite direction, hosting fundraisers and writing checks for Democrats. For a time, the political differences between Thiel and Altman strained their relationship.
7%
In later correspondence, the group acknowledged that they could walk back their commitments to openness once the narrative had served its purpose and as the need arose, such as to avoid bad actors getting their hands on the technology. “As we get closer to building AI, it will make sense to start being less open,” Sutskever raised to the trio in January 2016, shortly after OpenAI launched. “The Open in openAI means that everyone should benefit from the fruits of AI after its [sic] built, but it’s totally OK to not share the science.” “Yup,” Musk responded.
7%
Musk would later recount facing the fury of Larry Page for personally poaching Sutskever. The two didn’t speak much again as their views continued to clash on AI development.
7%
An accounting of the societal impacts of commercializing AI research returned an unsettling scorecard: Automated software being sold to the police, mortgage brokers, and credit lenders was entrenching racial, gender, and class discrimination.
7%
secret company contract with the Pentagon for its program known as Project Maven to develop AI-powered surveillance drones.
7%
Open Phil, as it was called, would fast become the primary funder of catastrophic and existentially related AI safety research.
7%
In reality, for AI systems to even be built, there is very often a hidden human cost.
8%
By the end of 2020, the Amodei siblings would become so disturbed by what they viewed as Altman’s and OpenAI’s break from its original premise that they would cleave off to form another AI lab, Anthropic, taking critical staff with them and creating a rivalry that would play a pivotal role in the frenzied release of ChatGPT.
8%
The amount of compute is based on three things: the processing power of an individual computer chip, or how many calculations it can crunch per second; the total number of computer chips available; and how long they are left running to perform their calculations.
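A minimal sketch of that three-factor arithmetic, in Python. Every number here (per-chip throughput, chip count, run length, utilization) is an illustrative assumption, not a figure from the book:

```python
# Total compute = per-chip throughput x number of chips x time running,
# discounted by how busy the chips are actually kept. All values assumed.

def total_training_compute(flops_per_chip: float,
                           num_chips: int,
                           seconds_running: float,
                           utilization: float = 0.35) -> float:
    """Return total floating-point operations for a training run."""
    return flops_per_chip * num_chips * seconds_running * utilization

# Hypothetical run: 10,000 chips at 100 teraFLOP/s each, for 30 days.
flops = total_training_compute(
    flops_per_chip=1e14,             # 100 teraFLOP/s per chip (assumed)
    num_chips=10_000,
    seconds_running=30 * 24 * 3600,  # 30 days of wall-clock time
)
print(f"{flops:.2e} FLOPs")          # ~9.07e+23
```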
8%
In 2017, a custom Nvidia server with eight of their best GPUs cost $150,000—a price that would rise roughly with inflation to nearly $195,000 by 2023. In the coming years, OpenAI’s Law was projecting that OpenAI would need thousands, if not tens of thousands, of GPUs to train just a single model. The cost of electricity to power that training would also explode. OpenAI needed more money—not just $1 billion, but billions of dollars to sustain itself in the coming years.
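A back-of-the-envelope sketch of what that implies for hardware cost alone, using the passage's own server figures; the 10,000-GPU fleet size is an assumed example drawn from the stated "thousands, if not tens of thousands" range:

```python
import math

# Cost of enough 8-GPU Nvidia servers (at the quoted 2023 price of
# ~$195,000 each) to house a given GPU fleet. Fleet size is assumed.

GPUS_PER_SERVER = 8
SERVER_PRICE_2023 = 195_000  # dollars, per the passage

def fleet_cost(num_gpus: int, price_per_server: int = SERVER_PRICE_2023) -> int:
    """Price of the servers needed to house num_gpus."""
    return math.ceil(num_gpus / GPUS_PER_SERVER) * price_per_server

print(f"${fleet_cost(10_000):,}")  # $243,750,000, before electricity
```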
9%
For the first time, OpenAI also spelled out its AGI definition: “highly autonomous systems that outperform humans at most economically valuable work.”
9%
To keep the deal secret from prying eyes, the for-profit entity was also incorporated under the alias SummerSafe LP. The name was a reference to an episode of the cartoon show Rick and Morty where the titular characters, mad scientist Rick and his grandson Morty, leave behind Morty’s older sister Summer for another universe and instruct their car to “keep Summer safe.” The car takes the objective seriously, resorting to extreme and harmful mechanisms of defense, including murdering, paralyzing, and torturing people who approach the vehicle. It was a nod to the potential pitfalls of AI.
9%
Within Microsoft, the investment was framed practically. Whether OpenAI did or didn’t reach AGI wasn’t really their concern. But OpenAI was clearly on the cutting edge, and investing early could finally turn Microsoft into an AI leader—both in software and in hardware—on par with Google.
10%
In Silicon Valley, office design is a kind of currency, a symbol of confidence in the company’s financial future and a way to gain a slim advantage in the competition for top-tier talent.
10%
And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software that had just as much sophistication, agility, and creativity as the human mind to match or exceed its performance on most (economically valuable) tasks.
10%
It was quite the opposite: persuading talented scientists to focus on problems that necessitated rather simple machine learning solutions, instead of the latest cutting-edge techniques that satisfied their ambitions and looked better on a research résumé. It was also finding the political will to deploy those solutions globally. “Technologies that would address climate change have been available for years, but have largely not been adopted at scale by society,” wrote the Climate Change AI researchers in their white paper.
10%
OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival.
11%
I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises.
Nicolette: They invited her, btw
11%
But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else. He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said.
11%
Since its conception, the development and use of AI has been propelled by tantalizing dreams of modernity and shaped by a narrow elite with the money and influence to bring forth their conception of the technology.
12%
Cade Metz, a longtime chronicler of AI, calls this rebranding the original sin of the field: So much of the hype and peril that now surround the technology flow from McCarthy’s fateful decision to hitch it to this alluring yet elusive concept of “intelligence.” The term lends itself to casual anthropomorphizing and breathless exaggerations about the technology’s capabilities.
13%
At their core, neural networks are calculators of statistics that identify patterns in old data—text, pictures, or videos—and apply them to new data.
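A toy stand-in for that identify-then-apply loop, far simpler than any neural network: a statistical fit on invented data, whose learned pattern is then applied to an unseen input:

```python
import numpy as np

# "Training": summarize old data with statistics (here, a fitted line).
old_x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
old_y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # invented data, roughly y = 2x
slope, intercept = np.polyfit(old_x, old_y, deg=1)

# "Inference": apply the learned pattern to new data.
new_x = 6.0
print(f"prediction: {slope * new_x + intercept:.1f}")  # ~11.9
```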
13%
for which Harvard professor Shoshana Zuboff would coin a term in 2014: surveillance capitalism. Where industrial capitalism derived value from producing material goods that people wanted to buy, surveillance capitalism, Zuboff argued, treated its users as the product.
13%
In 2023, a group of AI researchers, including Ria Kalluri at Stanford University, William Agnew from the University of Washington, and Abeba Birhane from the Mozilla Foundation, would analyze more than forty thousand computer-vision papers and patents, and note the pervasive use of abstract, detached language to sanitize and normalize the field’s reliance upon mass scraping and extraction. Detailed digital trails of people’s thoughts and ideas on social media were merely “text.” People and vehicles in pictures were merely “objects.” Surveillance was merely “detection.”
13%
In one example I thought was particularly clever, researchers used thousands of YouTube videos of the viral 2016 Mannequin Challenge, where people froze in place as cameras panned and zoomed around them, to train up AI models for processing three-dimensional scenes.
Nicolette: wow
14%
Following in Hinton’s and LeCun’s footsteps, many AI professors began to maintain dual affiliations with a company and university. At scale, the practice began to erode the boundaries of truly independent research.
14%
Neural networks are also highly sensitive to changes in their training data. Feed them a different set of pedestrian images, or a different set of stop sign images, and they will learn a whole new set of associations. But those changes are inscrutable.
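A sketch of that sensitivity under toy assumptions: the identical training procedure, fed two different synthetic datasets, settles on visibly different learned associations (weights):

```python
import numpy as np

# Same procedure, different training data -> different learned weights.
# Data and model are toy stand-ins for the pedestrian/stop-sign example.
rng = np.random.default_rng(0)

def train(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit a tiny logistic model with a few gradient steps."""
    w = np.zeros(X.shape[1])
    for _ in range(500):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (preds - y) / len(y)
    return w

X_a, X_b = rng.normal(size=(200, 3)), rng.normal(size=(200, 3))
y_a = (X_a[:, 0] > 0).astype(float)  # dataset A: labels track feature 0
y_b = (X_b[:, 1] > 0).astype(float)  # dataset B: labels track feature 1

print(train(X_a, y_a).round(2))      # weight on feature 0 dominates
print(train(X_b, y_b).round(2))      # weight on feature 1 dominates
```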
14%
Pop open the hood of a deep learning model and inside are only highly abstracted daisy chains of numbers. This is what researchers mean when they call deep learning “a black box.” They cannot explain exactly how the model will behave, especially in strange edge-case scenarios, because the patterns that the model has computed are not legible to humans.