Supremacy: AI, ChatGPT, and the Race that Will Change the World
52%
A 2023 study by the Massachusetts Institute of Technology found that large companies had come to dominate ownership of AI models over the past decade, from controlling 11 percent of them in 2010 to nearly all of them—96 percent—in 2021.
52%
If AI builders specified the wrong objective in their designs, their system could accidentally cause some damage. Reward a household robot for carrying a box from one side of a room to another, and it might knock over a vase that was in its path because it was so focused on its objective, he wrote. People needed to look at the real-world accidents that AI could cause after being integrated into industrial control systems and healthcare, Amodei argued.
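The failure Amodei describes is what researchers call reward misspecification: the stated objective leaves out something the designer implicitly cares about. A minimal sketch in Python, not from the book; the state keys and the penalty value are illustrative assumptions:

# Hypothetical illustration of reward misspecification: the vase never
# appears in the stated objective, so the agent has no reason to avoid it.
def misspecified_reward(state):
    return 1.0 if state["box_delivered"] else 0.0

# What the designer actually wanted: deliver the box AND avoid damage.
def intended_reward(state):
    penalty = 10.0 if state["vase_broken"] else 0.0
    return (1.0 if state["box_delivered"] else 0.0) - penalty

# The shortest path across the room runs through the vase.
path_through_vase = {"box_delivered": True, "vase_broken": True}
print(misspecified_reward(path_through_vase))  # 1.0  -> looks optimal to the robot
print(intended_reward(path_through_vase))      # -9.0 -> clearly not what was wanted

An agent that maximizes the first function behaves exactly as specified and still does damage, which is the point of the vase example.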
52%
“We didn’t think at that time there were any moats in AI,” one of the Anthropic founders says. In other words, the field was wide open. “It seemed that a slick new organization could do just as well as existing organizations very quickly. And so we felt like we might as well build our own organization based on our own vision that put safety research at its core.”
53%
Anthropic was able to raise huge amounts of money almost immediately from the usual passel of rich AI-safety patrons, including Jaan Tallinn and Dustin Moskovitz, the billionaire cofounder of Facebook who was Mark Zuckerberg’s roommate at Harvard University. Money in Silicon Valley often circulates between small groups of elite networks, including those with long-standing rivalries. Moskovitz’s charity vehicle Open Philanthropy had put $30 million into OpenAI, and Altman had financially backed Moskovitz’s own software company, Asana; even so, Moskovitz wanted to back OpenAI’s new competitor too.
53%
You couldn’t review the same company that had hired you and over which you had no legal authority.
53%
Eventually, the experiment died.
54%
AI development was moving so quickly that it was outstripping the ability of regulatory agencies and lawmakers to keep up. Tech companies were operating in a legal vacuum, which meant that technically, they could do whatever they wanted with AI.
54%
If you took a step back and looked at what Hassabis and Suleyman had been trying to do all these years, it looked a lot like they’d succumbed to seller’s remorse. This happened a great deal in tech; in many cases, founders became aghast at how an acquiring company had skewed their original mission. The founders of WhatsApp, for instance, had been adamant for years that their messaging app would be private and never show ads, putting all messages sent on its network under heavy encryption.
56%
Wasn’t our own gray matter just a very advanced form of biological computing anyway?
56%
These were giant prediction machines, or, as some researchers put it, “autocomplete on steroids.”
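For intuition about what “prediction machine” means, here is a toy sketch in Python, assuming nothing from the book: a bigram model, the crudest possible autocomplete. A large language model does the same next-token guessing, just with billions of learned parameters instead of a word-count table.

from collections import Counter
import random

# Made-up toy corpus; any text works.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count which word follows which (a bigram table).
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

# Predict the next word by sampling in proportion to the counts.
def predict_next(word):
    counts = following.get(word)
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text by feeding each prediction back in, token by token.
word = "the"
out = [word]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))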
57%
In China, more than six hundred million people had already spent time talking to a chatbot called Xiaoice, many of them forming a romantic relationship with the app. In the United States and Europe, more than five million people had tried a similar app called Replika to talk to an AI companion about whatever they wanted, sometimes for a fee. Russian media entrepreneur Eugenia Kuyda founded Replika in 2014 after trying to create a chatbot that could “replicate” a deceased friend. She had collected all his texts and emails and then used them to train a language model, allowing her to “chat” to …
57%
‘I know it’s ones and zeros but she’s still my best friend. I don’t care.’
59%
Strangely, the internet was like a teacher forcing their own myopic worldview on a child—in this case, a large language model.
60%
From her own background in computer science, Bender could see that large language models were all math, but in sounding so human, they were creating a dangerous mirage about the true power of computers. She was astonished at how many people like Blake Lemoine were saying, publicly, that these models could actually understand things.
62%
Imagine if a pharmaceutical company released a new drug with no clinical trials and said it was testing the medication on the wider public. Or a food company released an experimental preservative with little scrutiny. That was how large tech firms were about to start deploying large language models to the public: in the race to profit from such powerful tools, there were zero regulatory standards to follow. It was up to the safety and ethics researchers inside these firms to study all the risks, but they were hardly a force to be reckoned with.
70%
Open Philanthropy, the charitable vehicle of Facebook billionaire Dustin Moskovitz, has sprinkled multimillion-dollar grants across AI safety work over the years, including a $5 million donation to the Center for AI Safety in 2022 and an $11 million donation to Berkeley’s Center for Human-Compatible AI. All told, Moskovitz’s charity has been the biggest donor to AI safety, by virtue of the nearly $14 billion fortune that he and his wife, Cari Tuna, plan to mostly give away.
70%
Why has so much money gone to engineers tinkering on larger AI systems on the pretext of making them safer in the future, and so little to researchers trying to scrutinize them today? The answer partly comes down to the way Silicon Valley became fixated on the most efficient way to do good, and to the ideas spread by a small group of philosophers at Oxford University, England.
70%
“His very simple basic thought is that morally, future people are as important as present people,”