Kindle Notes & Highlights
by Parmy Olson
Read between February 7 and May 14, 2025
Despite his investments in the future of humankind, he was cultivating a kind of mental and emotional divide between himself and other people.
Figuring out a responsible form of stewardship for AI was becoming fraught for tech companies. Different objectives were on track to crash into one another, driven by an almost religious zealotry on one side and an unstoppable hunger for commercial growth on the other.
The meeting went well, but then the founders got some surprising news from Google. The company didn’t want its new ethics board to go forward after all. Suleyman was angry, since he’d pushed for the board’s establishment. Part of Google’s explanation at the time was that some of the board’s key members had conflicts of interest—Musk was potentially backing other AI efforts outside of DeepMind, for instance—and establishing a board just wasn’t legally feasible. To some of the board’s short-lived members, that sounded like baloney. They suspected that in reality, Google just didn’t like the idea…
As he kicked off his research at OpenAI, Sutskever and his team focused on making AI models that were as capable as possible, not necessarily as equitable, fair, or private.
It was clear that Musk was also chronically unreliable. He had promised to donate $1 billion to OpenAI over several years, but instead had put in somewhere between $50 and $100 million—a rounding error for the world’s richest worrier about AI.
Now as Altman fought to stay alive, he was going to knock down some of those guardrails. The cautious approach he’d started with was going to morph into something more reckless, and doing so would transform the AI field that he and DeepMind had been working in from a slow and largely academic pursuit into something more like the Wild West. Altman would use his ability to spin a compelling narrative to justify the departure he was about to take from OpenAI’s founding principles. He was a tech founder, and tech founders had to pivot sometimes. That was how it worked in Silicon Valley. He would…
Google was so desperate to get back into the Chinese market that it also reversed some of its previous resistance to Beijing’s demands on censorship and even surveillance. According to a memo that was leaked to The Intercept in 2018, Google executives had ordered its engineers to work on a prototype search engine for China codenamed Dragonfly, which blacklisted certain search terms and linked people’s searches to their mobile numbers. It was backtracking on its principles to help an oppressive regime surveil its citizens.
The companies are incentivized to keep us as addicted as possible to their platforms, since that generates more ad dollars.
Americans are so addicted to Facebook, Instagram, and other social media apps that they checked their phones 144 times a day on average in 2023, according to one study.
Facebook’s algorithms supercharged the spread of hateful content against the Rohingya so much that it fueled the Myanmar military in its genocidal campaign to kill, torture, rape, and displace the Muslim ethnic group in the thousands, according to a report by Amnesty International.
This wouldn’t be the first time large companies had distracted the public while their businesses swelled. In the early 1970s, the plastic industry, backed by oil companies, began to promote the idea of recycling as a solution to the growing problem of plastic waste.
Its famous “Crying Indian” ad aired on Earth Day in 1971 and encouraged people to recycle their bottles and newspapers to help prevent pollution. If they didn’t, they were guilty of showing flagrant disregard for the environment.
Recycling is not a bad thing per se. But by promoting the practice, the industry could argue that plastics weren’t inherently bad so long as they were recycled properly, which shifted the perception of responsibility from producers to consumers.
News publications, consumers, and policymakers in Washington spent more time talking about how to do more recycling than they did about regulating actual plastic production by companies.
By creating gaps in the data to train her own system, she had encoded it with all kinds of biases, including ones that dismissed the loss of human life.
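As a generic toy sketch of that failure mode (the record labels and filtering rule here are hypothetical, invented for illustration rather than taken from the system described): a model trained on data with deliberate gaps can only reproduce the priorities baked into whatever remains.

```python
from collections import Counter

# Hypothetical incident log; labels and the filtering rule are assumptions.
raw_records = [
    {"outcome": "mission_success"},
    {"outcome": "civilian_harm"},
    {"outcome": "mission_success"},
    {"outcome": "civilian_harm"},
]

# The "gap": records documenting human cost are dropped before training.
training_data = [r for r in raw_records if r["outcome"] != "civilian_harm"]

# A trivial majority-class model: it predicts whatever it saw most often.
counts = Counter(r["outcome"] for r in training_data)
prediction = counts.most_common(1)[0][0]

# Because the gap removed every costly outcome, the model's picture of
# the world cannot represent that outcome at all; the bias is now encoded.
print(prediction)  # mission_success
```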
When she warned managers at meetings about some of the potential problems their AI systems could introduce, she’d get emails from the Human Resources Department telling her to be more collaborative.
Silicon Valley tended to measure success with two metrics: how much money you had raised from investors, and how many people you had hired.
Given how disastrous or triumphant that end result could be, how they built AGI seemed insignificant by comparison. The end result was what mattered. OpenAI’s staff came to believe that they had a moral prerogative to create AGI first and shepherd its spoils to the world, in spite of what the nonprofit said in its charter about collaborating with others.
Some felt that if scientists at DeepMind or in China created AGI first, they were more likely to create some sort of devil.
For those who worked at OpenAI—and at DeepMind, too—the relentless focus on saving the world with AGI was gradually creating a more extreme, almost cultlike environment.
Second, the goal of AGI was more important than the specific means of getting there. Maybe they’d have to break some promises along the way, but humanity would be better off for it in the end.
Not only was their employer more likely to remain solvent, there was now a greater opportunity for them to reap the financial rewards of the large investments—instead of donations—that would come their way.
They had bought into the notion that the benefits of reaching AGI outweighed any scruples about how they might get there.
Ironically, two years after Amodei had complained about OpenAI’s commercial ties with Microsoft, he would take more than $6 billion in investment from Google and Amazon, aligning himself with both companies. It turned out that in this new world where building AGI required near bottomless resources, people didn’t say no to the tech conglomerates.
He didn’t want to force the panelists scrutinizing DeepMind’s health division to sign gag orders, so they could criticize the company freely and publicly if they wanted. But that also meant they weren’t privy to the full extent of DeepMind’s work, which often put them in the dark. And since their judgments weren’t legally binding, the board members complained they lacked teeth. In practice, the board couldn’t do very much.
If you took a step back and looked at what Hassabis and Suleyman had been trying to do all these years, it looked a lot like they’d succumbed to seller’s remorse.
This happened a great deal in tech and, in many cases, saw founders become aghast at how an acquiring company had skewed their original mission. The founders of WhatsApp, for instance, had been adamant for years that their messaging app would be private and never show ads, putting all messages sent on its network under heavy encryption.
Jan Koum had grown up in communist Ukraine, where phones were routinely tapped, and he had a note taped to his desk, written by his cofounder Brian Acton, that read “No Ads! No Games! No Gimmicks!” But after selling to Facebook for $19 billion, Koum and Acton found themselves having to compromise their earlier standards on privacy. At one point, for instance, they updated their policies so that people’s WhatsApp accounts could be linked behind the scenes with their Facebook profiles. An angry clash between Acton and Facebook’s executives ensued, and he eventually quit.
Although the larger company had signed a term sheet offering DeepMind $16 billion over ten years to run independently, that document was not legally binding.
The AI peace broker Reid Hoffman had tried talking the DeepMind founders into sticking with Google and the status quo. He had seen the thick drafts from lawyers that sketched out what their new company would look like, and noted the hundreds of hours that Suleyman and Hassabis were putting into the effort, and he saw right away that they were banging their heads against a brick wall.
There was also little doubt now that Google had strung along DeepMind’s founders, perhaps intentionally from the start. “It was a five-year suffocation strategy to dangle the carrot but never grant it,” says a former senior manager. “They let us grow larger and larger and become more and more dependent on them. They played us.” The founders of DeepMind failed to realize what was happening until it was too late. The political luminaries who’d agreed to be independent directors of the new DeepMind were told, with some embarrassment, that the project was off.
The reason is simple. When algorithms are designed to recommend controversial posts that keep your eyeballs on the screen, you are more likely to gravitate toward extreme ideas and the charismatic political candidates who espouse them.
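To make that mechanism concrete, here is a minimal sketch of engagement-driven ranking. Everything in it is illustrative: the field names, the weights, and the controversy signal are assumptions, not any platform’s actual scoring formula. The point is only that a feed sorted purely by predicted engagement will naturally surface the most provocative posts.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model's estimate of click-through
    predicted_dwell: float   # estimated seconds of attention
    controversy: float       # 0..1, e.g. share of angry reactions

def engagement_score(post: Post) -> float:
    # Hypothetical weights: controversial posts hold attention longer,
    # so a pure engagement objective implicitly rewards them.
    return post.predicted_clicks + 0.1 * post.predicted_dwell + 2.0 * post.controversy

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing here optimizes for accuracy or user well-being,
    # only for keeping eyeballs on the screen.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local bake sale raises funds", 0.02, 5.0, 0.05),
    Post("Outrageous claim about rival candidate", 0.08, 40.0, 0.9),
])
print([p.text for p in feed])  # the provocative post ranks first
```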
Why has so much money gone to engineers tinkering on larger AI systems on the pretext of making them safer in the future, and so little to researchers trying to scrutinize them today? The answer partly comes down to the way Silicon Valley became fixated on the most efficient way to do good and the ideas spread by a small group of philosophers at Oxford University, England.
Soon after FTX’s collapse, Bankman-Fried gave a remarkable interview to the news site Vox: “So the ethics stuff—mostly a front?” the reporter asked. “Yeah,” Bankman-Fried replied. “You were really good at talking about ethics, for someone who kind of saw it all as a game with winners and losers,” the reporter noted. “Ya,” said Bankman-Fried. “Hehe. I had to be.”
Bankman-Fried could rationalize his duplicity because he was working toward a bigger goal of maximizing human happiness. Musk could wave off his own inhumane actions, from baselessly calling people pedophiles on Twitter to alleged widespread racism at his Tesla factories, because he was chasing bigger prizes, like turning Twitter into a free speech utopia and making humans an interplanetary species. And the founders of OpenAI and DeepMind could rationalize their growing support for Big Tech firms in much the same way. So long as they eventually attained AGI, they would be fulfilling a greater…
Altman and Hassabis had started their companies with grand missions to help humanity, but the true benefits they had brought to people were as unclear as the rewards of the internet and social media. More clear were the benefits they were bringing to Microsoft and Google: new, cooler services and a foothold in the growing market for generative AI.
Evidence suggests that we have already offloaded some of our cognitive skills, such as short-term memory, onto computers. In 1955, a Harvard professor named George Miller tested the memory limits of humans by giving his subjects a random list of colors, tastes, and numbers. When he asked them to repeat as many things on the list as they could, he noticed that they were all getting stuck somewhere in the neighborhood of seven. His paper, “The Magical Number Seven, Plus or Minus Two,” went on to influence how engineers designed software and how telephone companies broke down phone numbers into…
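A small illustration of that design influence (my own sketch, not from the book): long digit strings are split into chunks of three or four, as in the familiar US-style phone number grouping, so each chunk fits comfortably within short-term memory.

```python
def chunk_digits(digits: str, sizes=(3, 3, 4)) -> str:
    """Split a digit string into memorable chunks: ten digits become
    a 3-3-4 grouping rather than one unbroken span of ten."""
    out, i = [], 0
    for size in sizes:
        out.append(digits[i:i + size])
        i += size
    return "-".join(out)

print(chunk_digits("4155550123"))  # 415-555-0123
```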
we’ve outsourced our memory to the company and inadvertently weakened our short-term memory skills.
Historically, when robots and algorithms replaced jobs done by human workers, wage growth fell, says MIT economist Daron Acemoglu, who coauthored a book about technology’s influence on economic prosperity, called Power and Progress. He calculates that as much as 70 percent of the increase in wage inequality in the United States between 1980 and 2016 was caused by automation.
Acemoglu says. “To the extent that generative AI follows the same direction as other automation technologies … it may have some of the same implications.”
The European Union looked at AI more pragmatically than the United States, thanks in part to having few major AI companies on its shores to lobby its politicians, and its lawmakers refused to be influenced by alarmism.
“One thing that Sam does really well is put just-barely believable statements out there that get people talking,” says one former OpenAI manager. “It does so much for OpenAI to be perceived as this global good company that will lead to tons of prosperity. That really helps them with regulators. But if you go look at what they’re building, it’s just a language model.”
The fifteenth-century invention of the printing press had led to an explosion of knowledge, but it also granted new powers to anyone who could afford to produce pamphlets and books to shape public opinion. And while railroads boosted commerce, they also expanded the political sway of railroad magnates, allowing their companies to act like monopolies and exploit workers. For all the prosperity and convenience that some of the world’s greatest innovations have brought, they also gave rise to new regimes that reshaped society in ways both good and bad.
They weren’t making the threat entirely out of loyalty to Altman either. A bigger issue was that Altman’s firing had killed a chance for many OpenAI staff—especially long-serving ones—to become millionaires. The company had been weeks away from selling employee shares to a major investor in a deal that would have valued OpenAI at about $86 billion.
Altman had gambled that he could have the best of both worlds, running a business with a philanthropic mission to save the world. As he’d written ten years earlier, the most successful start-up founders “create something closer to a religion.” What he didn’t anticipate was how much people would actually believe in it.
But the dramatic events of November 2023 also destroyed the mirage of accountability that Altman claimed to have hanging over him. He had publicly lauded the fact that a board could fire him, but in reality, it couldn’t. The two female directors who stood up to Altman, Toner and McCauley, were the ones who ended up being forced to leave. They also got the most flak on social media for weeks afterward while their fellow male mutineers, Sutskever and D’Angelo, kept their reputations and roles largely intact. D’Angelo remained on the board, and while Sutskever relinquished his seat, he kept a…
The most transformative technology in recent history was being developed by handfuls of people who were turning a deaf ear to its real-world side effects, who struggled to resist the desire to win big. The real dangers weren’t so much from AI itself but from the capricious whims of the humans running it.
But as 2024 wore on, none were forthcoming. A study by scientists at Stanford University concluded there was a “fundamental lack of transparency in the AI industry.”
Millions of data workers around the world were performing such tasks, often under challenging working conditions in countries like India, the Philippines, and Mexico.
We don’t know how racial and gender stereotypes will evolve in a future when more of the internet’s content is generated by machines. Latanya Sweeney, a government and technology professor at Harvard University, estimates that in the coming years, 90 percent of words and images on the web will no longer be created by humans. Most of what we see will be AI-generated.