Kindle Notes & Highlights
by
Parmy Olson
Read between
March 23 - March 28, 2025
mulling over the conclusions of a book he had just read by Roger Penrose called Shadows of the Mind. In it, the renowned physicist and mathematician argued that the human mind could perform tasks that no computer ever could. The ideas that Hassabis and others had proposed about the brain being “mechanistic” and a useful inspiration for building AI didn’t hold water, because the human brain was unique. It was virtually impossible to replicate.
“You could argue that a tiger is just a bunch of biochemical reactions, and there’s no point in being afraid of those.” But a tiger is also a collection of atoms and cells that can do plenty of damage if not kept in check. Similarly, AI might just be a collection of advanced
math and computer code, but when put together in the wrong way, it could be incredibly dangerous.
a new field of research called AI alignment, where scientists and philosophers were figuring out how best to “align” artificial intelligence systems with human goals.
Hassabis and the wealthy Estonian spoke again, and Tallinn eventually became one of DeepMind’s first investors alongside Peter Thiel. His goal wasn’t just to make money but to keep an eye on Hassabis’s progress and make sure he didn’t inadvertently create a horrifying, rogue AI. Tallinn saw himself as an evangelist for Yudkowsky’s ideas. He wanted to use his credibility as a deep-pocketed investor to help expose his warnings to the world’s most promising AI builders.
Bostrom had written a book called Superintelligence, and it was causing a stir among people working on AI and frontier technology. In the book, Bostrom warned that building “general” or powerful AI could lead to a disastrous outcome for humans, but he pointed out that it might not necessarily destroy us because it was malevolent or power-hungry. It might just be trying to do its job. For instance, if it was given the task of making as many paper clips as possible, it might decide to convert all of Earth’s resources and even humans into paper clips as the most effective way to fulfill its
...more
bought Instagram for $1 billion in what would become a masterstroke of social media consolidation. And he was just months away from paying an eye-watering $19 billion to the founders of WhatsApp.
Facebook made about 98 percent of its money from selling ads, but to sell more advertisements and keep growing, Zuckerberg needed
people to spend more and more time on his sites. DeepMind’s dozens of talented AI scientists could help. With smarter recommendation systems that could trawl through the personal data of their users, smarter algorithms behind Facebook and Instagram could show people the right pictures, posts, and videos to keep them scrolling for longer. Zuckerberg offered Hassabis $800 million for DeepMind,
Outwardly, Hassabis told his employees that DeepMind would stay independent for another twenty years. But privately he was tired of fundraising and frustrated that he was only spending a fraction of his time on actual research. Having just rejected a huge offer from Zuckerberg, it was hard to ignore how much money he could make from selling to a company in Silicon Valley, especially now that Big Tech was suddenly salivating over AI.
in 2012. A Stanford AI professor named Fei-Fei Li had created an annual challenge for academics called ImageNet, to which researchers submitted AI models that tried to visually recognize images of cats, furniture, cars, and more. That year, scientist Geoffrey Hinton’s team of researchers used deep learning to create a model that was far more accurate than anything before, and their results stunned the AI field. Suddenly everybody wanted to hire experts in this deep-learning AI theory inspired by how the brain recognized patterns.
“We had to sell, otherwise we would have been torn to pieces.”
Although he was just starting to gain mainstream fame as a forward-thinking tycoon, Musk had a reputation in tech circles for being capricious, firing staffers out of the blue and ousting the cofounder of Tesla.
For Utopia, for Money
when it came to how Google made money today, that process wasn’t very high-tech or innovative: it had become an enormous advertising company, like Facebook. The vast majority of Google’s profits and revenues came from tracking people’s personal information to target them with ads, through search, YouTube, and Gmail, and on millions of websites and apps that used the Google Display Network. There was something a little disconcerting about that for someone like Hassabis, who wanted to use AI to help the world. But he also knew that if he didn’t bite, Google could end up poaching his staff and
...more
A neural network is a type of software that gets built by being trained over and over with lots of data. Once it’s been trained, it can recognize faces, predict chess moves, or recommend your next Netflix movie. Also known as a “model,” a neural network is often made up of many different layers and nodes that process information in a vaguely similar way to our brain’s neurons. The more the model is trained, the better those nodes get at predicting or recognizing things.
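The "layers and nodes trained over and over" idea above can be sketched in a few lines. This toy two-layer network learning the XOR function is purely illustrative (the sizes, learning rate, and task are my assumptions, not from the book): each training pass nudges the connection weights so the nodes get better at predicting the right answer.

```python
import numpy as np

# A tiny two-layer neural network learning XOR, as a toy illustration
# of layers of nodes improving with repeated training. All choices here
# (8 hidden nodes, learning rate 0.5, 10,000 passes) are illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # first layer: 8 hidden nodes
W2 = rng.normal(size=(8, 1))   # second layer: 1 output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):               # each pass is one round of training
    h = sigmoid(X @ W1)                  # hidden-layer activations
    out = sigmoid(h @ W2)                # the network's current predictions
    # Backpropagate the prediction error and update both layers of weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out.ravel(), 2))  # predictions move toward [0, 1, 1, 0]
```

The more passes the loop makes, the closer the outputs get to the target values, which is the sense in which "the more the model is trained, the better those nodes get."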
What Ng had really wanted to do with his scientific research was free humanity from mental drudgery, in the same way the Industrial Revolution had liberated us from constant physical labor. Stronger AI systems would do the same for professional workers, he believed, “so we can all pursue intellectually more exciting, high-level tasks.”
The basic premise of transhumanism is that the human race is currently second-rate. With the right scientific discoveries and technology, we might one day evolve beyond our physical and mental limits into a new, more intelligent species. We’ll be smarter and more creative, and we’ll live longer. We might even manage to meld our minds with computers and explore the galaxy.
LessWrong had become the internet’s most influential hub for AI apocalypse fears, and some press reports pointed out that it had all the trappings of a modern doomsday cult.
Google was buying DeepMind for $650 million. It was considerably less than what the founders would have gotten from Zuckerberg but a huge amount of money for a British technology company, and it came with that all-important agreement to keep control of AGI out of the hands of a large corporation.
being part of Google, they had access to the world’s best supercomputers and the most data for training AI models too.
the best part was that DeepMind made sure you didn’t feel like you were working for an advertising giant. You were conducting research at a prestigious scientific organization that published papers in peer-reviewed journals like Science and Nature and solving the world’s biggest problems. It was the best of both worlds, if such a thing were possible.
“Demis and Mustafa were extraordinary, amazing storytellers. They balanced each other incredibly well.” Hassabis was the serious brain who read scientific papers late into the night, who talked through methodologies for hours with his top researchers, and who also tended not to consort with lower-ranked staff who didn’t have PhDs. It was Hassabis who fashioned a deeply hierarchical culture at DeepMind that was largely based on academic repute. Suleyman was the charismatic visionary when it came to rendering a vision of the future that everyone was working toward. One former staffer says he was
...more
While that gave DeepMind the talent and computing resources it needed to build AGI, the situation was a double-edged sword. When they did create AGI, Google would almost certainly want to monetize and control it. They weren’t sure how, exactly, but the board would at least make sure their human-level AI wouldn’t be misused.
Google was preparing to turn itself into a conglomerate called “Alphabet,” which would allow its various business divisions to operate with more independence. The executive told the founders that these new divisions would be called “autonomous units.” It would be like becoming an independent company again. They would get their own budgets, balance sheets, boards, and even outside investors. The idea sounded promising.
Out of view, Google’s real goal was to boost its share price, which had been stagnating. For years, Wall Street analysts had been struggling to evaluate Google’s bundle of other businesses outside of YouTube, Android, and its lucrative search engine. It had all these other businesses, too, like a smart thermostat company called Nest, a biotech research firm called Calico, a venture capital unit, and the “moonshot” X lab. Most of these divisions didn’t make any money, but if they were turned into separate firms housed under a parent company, that could loosen up the company’s balance sheet and
...more
To make matters worse, this new contender might even be exploiting his ideas. OpenAI had seven people listed as cofounders on its website. When Hassabis took a closer look at the names, he realized that five of them had worked as consultants and interns at DeepMind for several months. That’s when he became livid, according to people who worked with him. Hassabis had been an open book with DeepMind staff about the different strategies they needed to chase to reach AGI, such as building autonomous agents or teaching AI models to play games like Chess and Go. Now five scientists who’d heard all
...more
Deepening the humiliation, DeepMind leaders caught wind that Musk was trash-talking Hassabis to his contacts in Silicon Valley, according to people who worked at DeepMind and OpenAI. When the billionaire was talking to all the new staff at OpenAI, for instance, he warned them about DeepMind’s work in England and suggested Hassabis was a shady character. He cast suspicion over the way Hassabis had designed Evil Genius, a game where you played a villain trying to build a doomsday device and dominate the world. Whoever created games like that was probably a little maniacal themselves. OpenAI’s
...more
He’d also been picking up a more paranoid, pessimistic view of AI that tracked with his tendency to take things to their extreme. He could have, for instance, simply fought oil companies to tackle climate change, but decided to make humans an interplanetary species instead. He could have bought a stake in Twitter when he resolved it was too woke, but he bought the whole company. Maybe it was Musk’s habit of taking drastic action, his tendency to exaggerate, or his belief in his role as humanity’s savior, but within a couple of years of investing in DeepMind, the tycoon was tunneling deep into
...more
For all of Musk’s apocalyptic views and moral convictions that he should reach AGI before Demis Hassabis, building AI that was as capable as Google’s would also
boost his businesses. It was a profitable endeavor. Only that could explain why he agreed to work on it with one of the best-connected entrepreneurs in Silicon Valley: Sam Altman, the guy who turned “millions” into “billions” on slide decks, the guy who’d stuffed Y Combinator with futuristic start-ups, and the guy whose ambitions for AI were as big and far-reaching as Larry Page’s.
For Altman, building an all-purpose AI system was like taking all the technology start-ups he’d ever mentored in Y Combinator and putting them into one big Swiss Army knife. This powerful machine intelligence could be infinitely capable. Who knew if we’d even need businesses or start-ups anymore when a new superintelligence could generate enough wealth to keep everyone on Earth economically thriving? While Hassabis had believed that AGI would unlock the mysteries of science and the divine, Altman would say he saw it as the route to financial abundance for the world.
Hiring these people wouldn’t be so easy. Some of them were earning seven-figure salaries with companies like Google and Facebook, and Altman and Brockman couldn’t offer anywhere near those amounts. What they did have was a compelling mission to change the world and two prestigious names running the show. Elon Musk was now a globally revered tycoon, and running Y Combinator had elevated Altman’s status in the Valley to someone everybody wanted an introduction to. For AI researchers, even a short stint at this new nonprofit group offered prestigious connections and a potential career boost that
...more
Hinton now worked for Google; Fei-Fei Li left Stanford for Google; LeCun, for Facebook. Ng left Stanford for Google and then China’s Baidu. Even the top universities like Stanford, Oxford, and the Massachusetts Institute of Technology could barely hold on to their star academics, leaving a vacuum where the next generation of educators was meant to be. AI research became more secretive and more geared toward making money. That’s why Musk and Altman’s push for their research to be open to the public was so refreshing to researchers. Someone was finally addressing the concentration of AI
...more
A second reason was the data and computing power needed to run experiments in AI research. Universities typically have a limited number of GPUs, or graphics processing units, which are the powerful semiconductors made by Nvidia that run most of the servers training AI models today. When Pantic was working in academia, she managed to purchase sixteen GPUs for her entire group of thirty researchers. With so few chips, it would take them months to train an AI model. “This was ridiculous,” she says. Not long after she joined Samsung, she got access to two thousand GPUs. All that extra processing
...more
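The gap Pantic describes is easy to put into rough numbers. The back-of-envelope arithmetic below assumes near-linear scaling and an illustrative three-month academic training run; neither figure comes from the book.

```python
# Rough comparison of Pantic's two settings: 16 GPUs in academia versus
# 2,000 at Samsung. Assumes (optimistically) that training time scales
# linearly with GPU count; the three-month baseline is an illustrative guess.
academic_gpus = 16
samsung_gpus = 2000

speedup = samsung_gpus / academic_gpus            # 125x more hardware
months_in_academia = 3                            # illustrative assumption
hours = months_in_academia * 30 * 24 / speedup    # same job on 2,000 GPUs

print(f"speedup: {speedup:.0f}x")                 # → speedup: 125x
print(f"training time drops to ~{hours:.0f} hours")
```

Even under cruder assumptions, the point survives: a job measured in months on an academic cluster shrinks to roughly a day on an industrial one.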
One 2022 study found that over the previous decade, the number of academic papers that had ties to Big Tech firms had more than tripled to 66 percent. Their growing presence “closely resembles strategies used by Big Tobacco,” said the authors of the study,
When it came to making AI smarter, more was better. As Sutskever kicked off his research at OpenAI, he and his team focused on making AI models that were as capable as possible, not necessarily as equitable, fair, or private. In very simple terms, there was a formula for doing that: if you trained an AI model with more and more data, raised the number of parameters the model had, and boosted the computing power used for training, the model would become more proficient.
“If you have a very large dataset and a very large neural network, success is guaranteed,” Sutskever said at one AI conference. The last three words of that statement became his catchphrase among AI scientists, all the more so after OpenAI’s big launch, as the field took on a new air of excitement about this new nonprofit led by a brilliant scientist and several of Silicon Valley’s biggest power brokers.
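The "more is better" recipe is often modeled as a power law: loss falls smoothly as parameters and data grow, toward an irreducible floor. The exponents and constants in this sketch are illustrative placeholders, not values from the book or from any published scaling study.

```python
def toy_loss(params, tokens, a=400.0, b=400.0, alpha=0.34, beta=0.28, floor=1.7):
    """Toy power-law loss curve: more parameters and more training data
    both push the loss down toward an irreducible floor. The coefficients
    are illustrative placeholders, not fitted to any real model."""
    return floor + a / params**alpha + b / tokens**beta

# Scaling up both axes (100x the parameters, 100x the data) lowers the loss.
small = toy_loss(params=1e8, tokens=1e9)
big = toy_loss(params=1e10, tokens=1e11)
print(small > big)  # → True
```

The shape, not the numbers, is the point: under a curve like this, every order of magnitude of data, parameters, and compute buys a predictable improvement, which is what made "success is guaranteed" feel like a formula rather than a hope.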
“Having systems that fail unpredictably is not a good thing,”
To build AGI, OpenAI’s founding team needed to attract more money and talent, so they tried focusing on projects that could generate positive stories in the press.
Their early researchers created a computer that could beat the top human champions at Dota, a strategic 3D video game, and they also built a five-fingered robotic hand, powered by a neural network, that could solve a Rubik’s Cube. These projects were aimed at keeping Elon Musk happy by trying to one-up the work happening across the Atlantic in the secretive offices of DeepMind.
“I was one of the investors in DeepMind, and I was very concerned that Larry [Page] thinks Demis works for him. Actually, Demis just works for himself,” Musk said, according to a person who was there. “And I don’t trust Demis.” The researchers were astonished. To many of them, it sounded like Musk had a personal issue with Hassabis more than any particular worry about where AI was headed. When he was asked about his antagonism for Hassabis, he mentioned the computer games that the British entrepreneur had designed in the past that focused on world domination. At the same session, Musk
...more
Researchers on the Dota project, for instance, couldn’t understand why they were working on a game simulation if their ultimate goal was to build an AGI that would make people’s lives better. The reason was they needed Musk’s money. “If we don’t work on this, OpenAI might not exist in a few years, or even next year,” Brockman told the researchers.
the more they chased DeepMind in those fields, the more Altman and his leadership team realized that these approaches to AI didn’t promise much real-world impact. That’s when OpenAI started to evolve into a very different kind of organization to DeepMind. While DeepMind had a hierarchical, academic culture that prized its PhD staff, OpenAI’s culture was more engineering-led. Many of its top researchers were programmers, hackers, and former start-up founders at Y Combinator. They tended to be more interested in building things and making money than in making a discovery and achieving prestige
...more
Many staff at OpenAI knew that was hogwash. They suspected that as much as Musk said he cared about creating safer AI, he also wanted to be the person who built the most capable AI. He was already the wealthiest man on Earth and gaining unprecedented sway over American infrastructure: NASA was putting astronauts into space with SpaceX; Tesla was leading the charge on electric vehicle standards; and Musk’s satellite internet company, Starlink, was on course to try to shape the outcome of the Ukraine war.
DeepMind was trying to change its governance structure so that a profit-motivated monopoly in the form of Google wouldn’t have free rein to monetize AGI. Instead, a council of expert advisors would keep things in check. Altman and Musk had established OpenAI as a nonprofit and promised to share its research and even its patents with other organizations if it looked like they were getting closer to the threshold of superintelligent machines. That way it would prioritize humanity.
For instance, he’d suggest that as their AI got more powerful and potentially dangerous, DeepMind could hire Terence Tao, a professor at the University of California, Los Angeles, who was widely regarded as one of the world’s greatest living mathematicians. A former child prodigy who went to college at the age of nine, Tao had become known as a Mr. Fix-It for frustrated researchers, according to New Scientist magazine.
AI alignment, the practice of making AI more “aligned” to human values to prevent it going rogue.
Instead, its founders contrived a completely new legal structure they called a global interest company or GIC. The idea was that DeepMind would become an organization that was more like a division of the United Nations, a transparent and responsible steward of AI for humanity’s sake. It would give Alphabet an exclusive license so that any AI breakthroughs DeepMind made that could support Google’s search business would flow to the technology giant. But DeepMind would spend the majority of its money, talent, and research on advancing its social mission, working on drug discovery and better
...more
Originating in China more than 2,500 years ago, Go looked deceptively simple. It is played on a nineteen-by-nineteen grid board with a few handfuls of black and white stones. The players take turns placing a stone on an intersection of the grid. The goal: capture territory on the board by surrounding empty points with your stones, and capture your opponent’s stones as well. It’s one of the most strategically complex games in existence, with a number of board positions on the order of 10^170, dwarfing the estimated number of atoms in the observable universe, which is closer to 10^80.
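Those figures are easy to check. With 361 intersections and three possible states per intersection (empty, black, or white), an upper bound on board configurations is 3^361, or roughly 10^172; the count of legal positions, about 2.1 × 10^170, is a refinement of that bound.

```python
import math

INTERSECTIONS = 19 * 19          # 361 points on the Go board
STATES = 3                       # each point is empty, black, or white

# Upper bound on board configurations; legal positions are somewhat fewer.
log10_positions = INTERSECTIONS * math.log10(STATES)
print(round(log10_positions))    # → 172, i.e. about 10^172 configurations

ATOMS_LOG10 = 80                 # rough estimate for the observable universe
print(log10_positions > ATOMS_LOG10)  # → True, by over 90 orders of magnitude
```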