Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
Read between August 16 - August 21, 2025
2%
development would demand extraordinary amounts of money. Musk and Altman, who had until then both taken more hands-off approaches as cochairmen, each tried to install himself as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. In hindsight, the rift was the first major sign that OpenAI was not in fact an altruistic project but rather one of ego.
2%
Over the next four years, OpenAI became everything that it said it would not be. It turned into a nonprofit in name only, aggressively commercializing products like ChatGPT and seeking unheard-of valuations. It grew even more secretive, not only cutting off access to its own research but shifting norms across the industry to bar a significant share of AI development from public scrutiny. It triggered the very race to the bottom that it had warned about, massively accelerating the technology’s commercialization and deployment without shoring up its harmful flaws or the dangerous ways that it ...
3%
Through my reporting, I’ve come to understand two things: Artificial intelligence is a technology that takes many forms. It is in fact a multitude of technologies that shape-shift and evolve based not merely on technical merit but on the ideological drives of the people who create them and the winds of hype and commercialization.
3%
Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources.
3%
As Baidu raced to develop its ChatGPT equivalent, employees working to advance AI technologies for drug discovery had to suspend their research and cede their computer chips to develop the chatbot instead. The current AI paradigm is also choking off alternative paths to AI development. The number of independent researchers not affiliated with or receiving funding from the tech industry has rapidly dwindled, diminishing the diversity of ideas in the field not tied to short-term commercial benefit.
3%
So much of what our society actually needs—better health care and education, clean air and clean water, a faster transition away from fossil fuels—can be assisted and advanced with, and sometimes even necessitates, significantly smaller AI models and a diversity of other approaches. AI alone won’t be enough, either: We’ll also need more social cohesion and global cooperation, some of the very things being challenged by the existing vision of AI development.
3%
Policymakers can implement strong data privacy and transparency rules and update intellectual property protections to return people’s agency over their data and work. Human rights organizations can advance international labor norms and laws to give data labelers guaranteed wage minimums and humane working conditions as well as to shore up labor rights and guarantee access to dignified economic opportunities across all sectors and industries. Funding agencies can foster renewed diversity in AI research to develop fundamentally new manifestations of what this technology could be. Finally,
4%
“He literally made a video game where an evil genius tries to create AI to take over the world,” Musk shouted, referring to Hassabis’s 2004 title Evil Genius, “and fucking people don’t see it. Fucking people don’t see it! And Larry? Larry thinks he controls Demis but he’s too busy fucking windsurfing to realize that Demis is gathering all the power.”
4%
Musk would come to feel like Altman had used him to catapult to prominence. It was an echo of an observation that has followed Altman throughout his life. “You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king,” his mentor, Paul Graham, once famously said. Graham reinforced the point again years later: “Sam is extremely good at becoming powerful.”
4%
Jerry was a people person. He had a passion for affordable housing and worked on several commercial and residential projects that sought to foster community and revitalize St. Louis. Sam would later repeat one of the biggest lessons his father taught him: “You always help people—even if you don’t think you have time, you figure it out.”
5%
to his formula, people say, is the combination of his remarkable listening skills, his willingness to help, and his ability to frame whatever he has to offer in terms of exactly what you want.
10%
In December, Climate Change AI would host a packed gathering at NeurIPS, the yearly AI research conference, a day after another group held a different well-attended workshop on machine learning for health care research down the hall, in a room the size of a football field. The talks and the posters lining the walls showcased a plethora of applications, including the use of computer vision to detect the early, near-imperceptible stages of diseases like Alzheimer’s in medical image scans, and the use of speech recognition to help patients with vocal impediments communicate more easily. The ...
10%
offered my own example: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models. That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples. “It
11%
All the while, those who profited from the cotton gin painted the invention as one that made the enslaved happier. “I say it boldly, there is not a happier, more contented race upon the face of the earth,” said one South Carolina congressman. These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable—are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.
12%
Artificial intelligence as a name also forged the field’s own conceptions about what it was actually doing. Before, scientists were merely building machines to automate calculations, not unlike the large hulking apparatus, as portrayed in The Imitation Game, that Turing made to crack the Nazi Enigma code during World War II. Now, scientists were re-creating intelligence—an idea that would define the field’s measures of progress and would decades later birth OpenAI’s own ambitions. But the central problem is that there is no scientifically agreed-upon definition of intelligence. Throughout ...
12%
OpenAI is the poster child for this line of thought. It cannot say how the technology will deliver on these promises—only that the staggering price society needs to pay for what it is developing will someday be worth it. What’s left unsaid is that in a vacuum of agreed-upon meaning, “artificial intelligence” or “artificial general intelligence” can be whatever OpenAI wants.
12%
are depressed. There was really nothing much intelligent about it. He later published a tome called Computer Power and Human Reason in the decade following Minsky’s Perceptrons that argued that humans and machines are different and the AI field’s attempt to blur that distinction would lead to profound societal consequences. It would, for example, allow people in power—whether CEOs or politicians—to execute their will through machines while absolving themselves of moral responsibility.
13%
Neural networks, meanwhile, come with a different trade-off. For years the field has aggressively debated whether such connectionist software can do what the symbolic ones can: store information and reason. Regardless of the answer, it has become clear that if they can, they do so inefficiently. Only with extraordinary amounts of data and computational power have neural networks even begun to have the kinds of behaviors that may suggest the emergence of either property. That said, one area where deep learning models really shine is how easy it is to commercialize them. You do not need ...
13%
normalize the field’s reliance upon mass scraping and extraction. Detailed digital trails of people’s thoughts and ideas on social media were merely “text.” People and vehicles in pictures were merely “objects.” Surveillance was merely “detection.”
13%
With new awareness, I began to notice how the aggressive push to collect more training data was leading to pervasive surveillance not just in the digital world but the physical one as well. I noticed, too, how the gaze of that physical surveillance seemed to repeatedly fall on already vulnerable populations, including children or historically marginalized groups, even more so in developing countries. That year, I stumbled across a Massachusetts-based, Harvard-incubated startup selling AI-powered headbands that it said could measure a student’s brain wave activity to tell a teacher whether or ...
13%
As I recounted this worry to a colleague, she introduced me to a phrase that had already been coined for the phenomenon: “data colonialism.” I discovered the work of scholars Nick Couldry and Ulises A. Mejias, whose foundational text The Costs of Connection, published just that year, argued that Silicon Valley’s pervasive datafication of everything was leading to a return of disturbing historical patterns of conquest and extractivism.[*] The following year, a paper called “Decolonial AI” from Shakir Mohamed and William Isaac at DeepMind and Marie-Therese Png at the University of Oxford ...
13%
Thami Nkosi, who was born and raised in one of the poorest neighborhoods in Johannesburg, which used to be a chemical waste dump for the mining industry. He showed me the thousands of cameras dotting the city’s sprawling streets and described to me the ways it was restricting the movements of Black people, already squeezed by the racial legacies of apartheid and in fear of being criminalized, simply for being Black in a white neighborhood. “They’re essentially monetizing public spaces and public life,” Nkosi said.
14%
Following in Hinton’s and LeCun’s footsteps, many AI professors began to maintain dual affiliations with a company and university. At scale, the practice began to erode the boundaries of truly independent research.
14%
Universities could no longer afford the computer chips or the electricity needed to work in the hottest areas of AI development.
14%
in just three years, from 2017 to 2020, industry-affiliated models grew from 62 percent to a whopping 91 percent of the world’s best-performing AI models.
14%
Neural networks have shown, for example, that they can be unreliable and unpredictable. As statistical pattern matchers, they sometimes home in on oddly specific patterns or completely incorrect ones. A deep learning model might recognize pedestrians only by the crosswalks underneath them and fail to register a person who is jaywalking. It might learn to associate a stop sign with being on the side of the road and miss the same sign extended on the side of a school bus or being held by a crossing guard. Neural networks are also highly sensitive to changes in their training data. Feed them a ...
14%
In 2019, white hat hackers tricked a Tesla in self-driving mode into veering into an oncoming lane of traffic. All they did was place a series of tiny stickers on the road to fool the car’s deep learning model into misfiring and registering the wrong lane as the right one.
14%
2024, researchers at Peking University and several other universities, including University College London, found that the most up-to-date models now had relatively matched performance for pedestrians with different skin colors but were more than 20 percent less accurate at detecting children than adults, because children had been poorly represented in the models’ training data.
14%
In fact, deep learning models are inherently prone to having discriminatory impacts because they pick up and amplify even the tiniest imbalances present in huge volumes of training data. It’s not just a problem when a demographic is poorly represented, but when it’s overrepresented as well.
14%
“The human brain has about 100 trillion parameters, or synapses,” Hinton told me in 2020. “What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain.
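A quick back-of-the-envelope check of the comparison Hinton is drawing here (my own arithmetic, not from the book):

\[ \frac{10^{14}\ \text{synapses}}{1.75 \times 10^{11}\ \text{parameters}} \approx 570 \]

so GPT-3's parameter count sits roughly three orders of magnitude below the brain's synapse count, which is what the "thousand times smaller" framing rounds to.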
14%
Forever stuck in the realm of correlations, neural networks would never, with any amount of data or compute, be able to understand causal relationships—why things are the way they are—and thus perform causal reasoning. This critical part of human cognition is why humans need only learn the rules of the road in one city to be able to drive proficiently in many others, Marcus argued. Tesla’s Autopilot, by contrast, can log billions of miles of driving data and still crash when encountering unfamiliar scenarios or be fooled with a few strategically placed stickers. Marcus advocated instead for ...
14%
bigger root issue has been the whittling down and weakening of a scientific environment for robustly exploring that possibility and other alternatives to deep learning.
14%
Generative AI, the product of OpenAI’s vision, could not have emerged without the first era of AI commercialization. Generative AI models are deep learning models trained to generate reproductions of their data inputs. From old text, they learn to synthesize new text; from old images, they learn to synthesize new images. But to do so at high-enough fidelity to become humanlike, which OpenAI says is key in its quest for AGI, they are trained on more data and compute than have ever been used before. Generative AI is thus the maximalist form of deep learning.
15%
In the end, Moore’s Law was not based on some principle of physics. It was an economic and political observation that Moore made about the rate of progress that he could drive his company to achieve, and an economic and political choice that he made to follow it. When he did, Moore took the rest of the computer chip industry with him, as other companies realized it was the most competitive business strategy. OpenAI’s Law, or what the company would later replace with an even more fevered pursuit of so-called scaling laws, is exactly the same. It is not a natural phenomenon. It’s a ...
15%
The intelligence of different species was correlated with the size of their biological brains, he’d say. Thus, if nodes were like neurons, he argued, advancements in digital intelligence should emerge by scaling simple neural networks to have more and more nodes.
15%
August 2017, that changed with Google’s invention of a new type of neural network known as the Transformer. Transformers excel at picking up long-range patterns.
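For readers who want to see concretely why Transformers pick up long-range patterns, here is a minimal sketch (my own illustration, not from the book) of single-head self-attention, the core Transformer operation: every token computes weights over every other token in a single step, so distance between positions carries no extra cost.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings. Each output row is a
    weighted mix over ALL positions, so the first token can draw on
    the last one directly -- no recurrence over intermediate steps.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise affinities, (seq_len, seq_len)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V

# Toy usage with hypothetical dimensions: 6 tokens, 8-dim embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (6, 8)
```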
16%
How could it benefit all of humanity when it lacked meaningful global representation? Even as a European coming from a highly overlapping culture to the US, he often felt alienated by the overwhelming bias in AI safety and other discussions toward American values and American norms.
17%
OpenAI next expanded the data by adding an even broader scrape of links shared on Reddit as well as a scrape of English-language Wikipedia and a mysterious dataset called Books2, details of which OpenAI has never disclosed, but which two people with knowledge of the dataset told me contained published books ripped from Library Genesis, an online shadow repository of torrented books and scholarly articles. In 2023, the Authors Guild and seventeen authors, including George R. R. Martin and Jodi Picoult, would sue OpenAI and Microsoft alleging mass copyright infringement. OpenAI would respond in March ...
17%
When this still wasn’t enough, OpenAI employees also gathered whatever they could find on the internet, scraping links shared on Twitter, transcribing YouTube videos, and cobbling together a long tail of other content, including from niche blogs, existing online data dumps, and a text storage site called Pastebin.
18%
In a 2023 paper, Abeba Birhane and her coauthors would introduce the concept of “hate scaling laws” to critique the premise of training deep learning models on unfiltered data, or what they called “data-swamps.” They analyzed two publicly available image-and-text datasets used to train open-source image generators, LAION-400M and LAION-2B-en, both pulled from Common Crawl, with four hundred million and two billion images, respectively. They showed that the amount of hateful and abusive content scaled with the size of the dataset and exacerbated the discriminatory behaviors of the models ...
20%
GPT-3, released one year after Strubell’s paper, now topped them. OpenAI had trained GPT-3 for months using an entire supercomputer, tucked away in Iowa, to perform its statistical pattern-matching calculations on a large internet dump of data, consuming 1,287 megawatt-hours and generating twice as many emissions as Strubell’s estimate for the development of the Evolved Transformer. But these energy and carbon costs wouldn’t be known for nearly a year. OpenAI would initially give the public one number to convey the sheer size of the model: 175 billion parameters, over one hundred times the ...
20%
In 2017, a Facebook language model had mistranslated a Palestinian man’s post that said “good morning” in Arabic to “attack them” in Hebrew, leading to his wrongful arrest.
20%
She fired off a second email, this time more piercing. She called out her colleagues for ignoring her and emphasized how dangerous it was to have a large language model trained on Common Crawl, which included online internet forums such as Reddit. As a Black woman, she never spent time on Reddit precisely because of how badly the community harassed Black people, she said. What would it mean for GPT-3 to absorb and amplify that toxic behavior? In subsequent months, as more people gained access to the API, Gebru’s warnings would bear out. People would post myriad examples online of GPT-3 ...
21%
harder to eradicate toxicity or more broadly ensure that they reflected evolving social norms and values.
21%
The request was a dramatic aberration from the way Google and the rest of the industry handled research. Like many labs at other companies, Google Brain had until then largely conducted itself as an academic operation and given researchers wide latitude to pursue the questions they wanted to. At times, the company reviewed papers to ensure they didn’t expose sensitive IP or customer data. But researchers like Gebru had never known the company to block or retract a paper simply for shedding light on inconvenient truths.
22%
2023, Stanford researchers would create a transparency tracker to score AI companies on whether they revealed even basic information about their large deep learning models, such as how many parameters they had, what data they were trained on, and whether there had been any independent verification of their capabilities. All ten of the companies they evaluated in the first year, including OpenAI, Google, and Anthropic, received an F; the highest score was 54 percent. With this sharp reversal in transparency norms, the most alarming consequence would be the erosion of scientific integrity. The ...
23%
“It was sad to me that we deployed this API with our mission of benefiting humanity, and everyone had such positive impressions about how we had users saving time on customer service or whatever,” one former OpenAI employee says, “but in reality, a lot of our traffic was going to AI Dungeon child sexual content and a creepy AI girlfriend product.” —
23%
In a memo, they laid out key critiques of the GitHub project, suggesting Scott reconsider the premise of hoovering up developer data published under a Creative Commons license without consent or compensation, a former staffer remembers. Microsoft, the memo said, should consider canceling the product, or at the very least take a percentage of the product’s profits and give it back to the open-source community. While Scott was receptive, creating the tool and being first to market was his central focus, the staffer says. In the end, Microsoft donated some money to an existing program for ...
23%
Confusion abounded over whose responsibility it was—OpenAI’s or GitHub’s—to optimize the model into a deployable product. There was also a lack of clarity among OpenAI employees around how much IP they should be sharing with their GitHub counterparts, while GitHub employees struggled with how much to trust OpenAI.
23%
Tools for Humanity’s main product, Worldcoin, was a self-described “collectively owned” cryptocurrency that would allow everyone to eventually get a share of its value. As part of the scheme, the company was developing a dramatic-looking chrome-colored orb—roughly the size of a bowling ball and partly a reflection of Altman’s design tastes—to scan people’s irises and verify their identity before giving them their cut. The iris scanning would be a necessity, the founders argued, once AI also made it increasingly hard to distinguish fake media from reality. An extensive investigation from Eileen ...