Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World
Kindle Notes & Highlights
4%
The rise of deep learning marked a fundamental change in the way digital technology was built. Rather than carefully defining how a machine was supposed to behave, one rule at a time, one line of code at a time, engineers were beginning to build machines that could learn tasks through their own experiences, and these experiences spanned such enormous amounts of digital information, no human could ever wrap their head around it all. The result was a new breed of machine that was not only more powerful than before but also more mysterious and unpredictable.
7%
Their original sin was that they called their field “artificial intelligence.” This gave decades of onlookers the impression that scientists were on the verge of re-creating the powers of the brain when, in reality, they were not.
7%
Minsky and Papert described the Perceptron in elegant detail, exceeding, in many respects, the way Rosenblatt described it himself. They understood what it could do, but they also understood its flaws.
7%
Following Minsky’s lead, most researchers embraced what was called “symbolic AI.”
7%
Frank Rosenblatt aimed to build a system that learned behavior on its own in the same way the brain did. In later years, scientists called this “connectionism,” because, like the brain, it relied on a vast array of interconnected calculations.
7%
Whereas neural networks learned tasks on their own by analyzing data, symbolic AI did not. It behaved according to very particular instructions laid down by human engineers—discrete rules that defined everything a machine was supposed to do in each and every situation it might encounter. They called it symbolic AI because these instructions showed machines how to perform specific operations on specific collections of symbols, such as digits and letters.
8%
Learning, Hebb believed, was the result of tiny electrical signals that fired along a series of neurons, causing a physical change that wired these neurons together in a new way. As his disciples said: “Neurons that fire together, wire together.” This theory—known as Hebb’s Law—helped inspire the artificial neural networks built by scientists like Frank Rosenblatt in the 1950s.
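Hebb’s Law can be sketched as a weight update proportional to the product of the two neurons’ activities. A minimal illustration, assuming an arbitrary learning rate and a made-up two-input, one-output layer (nothing here comes from the book):

```python
import numpy as np

# Hebb's rule: the connection between two neurons strengthens in
# proportion to the product of their activities
# ("neurons that fire together, wire together").
def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: w += lr * (post outer pre)."""
    return w + lr * np.outer(post, pre)

w = np.zeros((1, 2))          # two inputs, one output, zero weights
pre = np.array([1.0, 0.0])    # only the first input neuron fires
post = np.array([1.0])        # the output neuron fires too
w = hebbian_update(w, pre, post)
# only the co-active connection (input 0 -> output) is strengthened
```

Note the rule only ever strengthens co-active connections; real models add decay or normalization to keep weights bounded.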
9%
The answer, Rumelhart suggested, was a process called “backpropagation.” This was essentially an algorithm, based on differential calculus, that sent a kind of mathematical feedback cascading down the hierarchy of neurons as they analyzed more data and gained a better understanding of what each weight should be.
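That cascading feedback can be shown in miniature. The following sketch trains a tiny two-layer network on a single example; the network size, sigmoid activations, squared-error loss, and learning rate are all illustrative choices, not details from Rumelhart’s work:

```python
import numpy as np

# Minimal backpropagation: the backward pass is the chain rule
# applied layer by layer, sending the error derivative cascading
# down from the output toward the input.
rng = np.random.default_rng(0)
x = np.array([0.5, -0.2])            # one input example
y = np.array([1.0])                  # its target output
W1 = rng.normal(size=(3, 2)) * 0.5   # hidden-layer weights
W2 = rng.normal(size=(1, 3)) * 0.5   # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(200):
    # forward pass
    h = sigmoid(W1 @ x)
    out = sigmoid(W2 @ h)
    losses.append(float(((out - y) ** 2).sum()))
    # backward pass: error derivative at the output...
    d_out = 2 * (out - y) * out * (1 - out)
    # ...propagated down to the hidden layer via the chain rule
    d_h = (W2.T @ d_out) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= 0.5 * np.outer(d_out, h)
    W1 -= 0.5 * np.outer(d_h, x)
# the loss shrinks as the weights settle toward good values
```

Each pass nudges every weight a little in the direction that reduces the error, which is exactly the “better understanding of what each weight should be” the highlight describes.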
12%
Learning, he believed, was inextricable from intelligence. “Any animal with a brain can learn,” he often said.
12%
His breakthrough was a variation on the neural network modeled on the visual cortex, the part of the brain that handles sight. Inspired by the work of a Japanese computer scientist named Kunihiko Fukushima, he called this a “convolutional neural network.”
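The core idea of a convolutional network is that one small filter is slid across the whole image, so the same weights can detect a feature anywhere it appears. A hand-rolled sketch of that sliding operation (a “valid” cross-correlation, as most deep learning libraries implement it; the image and edge filter below are invented for illustration):

```python
import numpy as np

# Slide a small kernel over an image; the same weights are reused
# at every position, which is what makes the layer "convolutional".
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
edge = np.array([[-1., 1.]])    # responds to a dark-to-light edge
response = conv2d(image, edge)
# the response peaks exactly where the vertical edge sits
```

In LeCun’s networks many such filters are stacked in layers, and their weights are learned by backpropagation rather than chosen by hand as here.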
12%
“The fox knows many little things and the hedgehog knows one big thing.”
12%
LeCun’s team built a chip for this one particular task. That meant it could handle the task at speeds well beyond the standard processors of the day: about 4 billion operations a second. This fundamental concept—silicon built specifically for neural networks—would remake the worldwide chip industry, though that moment was still two decades away.
13%
Brockett had a panic attack—which he thought was a heart attack—and was rushed to the hospital. He later called it his “come-to-Jesus moment,” when he realized he had spent six years writing rules that were now obsolete. “My fifty-two-year-old body had one of those moments when I saw a future where I wasn’t involved,” he says.
14%
If you sent an email asking if he preferred to be called Geoffrey or Geoff, his response was equal parts clever and endearing: I prefer Geoffrey. Thanks, Geoff
15%
“The theme in Geoff’s group was always: What’s old is new,” Dahl says. “If it’s a good idea, you keep trying for twenty years. If it’s a good idea, you keep trying it until it works. It doesn’t stop being a good idea because it doesn’t work the first time you try it.”
21%
The AlexNet paper would become one of the most influential papers in the history of computer science, with over sixty thousand citations from other scientists.
27%
“I don’t know how to do research unless it’s open, unless we are part of the research community,” LeCun says. “Because if you do it in secret, you get bad-quality research. You can’t attract the best. You’re not going to have people who can push the state of the art.”
28%
she told Bloomberg News that artificial intelligence suffered from a “sea of dudes” problem—that this new breed of technology would fall short of its promise because it was built almost entirely by men.
28%
The trick, Alan Eustace believed, was to surround yourself with people who could apply new kinds of expertise to problems that seemed unsolvable with the old techniques. “Most people look at particular problems from a particular point of view and a particular perspective and a particular history,” he says. “They don’t look at the intersections of expertise that will change the picture.”
31%
The strength of the system, he told his audience, was its simplicity. “We use minimum innovation for maximum results,” he said, as applause rippled across the crowd, catching even him by surprise. The power of a neural network, he explained, was that you could feed it data and it learned behavior on its own.
32%
the Google engineers beat Dean’s deadline by three months, and the difference was the TPU. A sentence that needed ten seconds to translate on ordinary hardware back in February could translate in milliseconds with help from the new Google chip.
36%
Oppenheimer was a world-class physicist: He understood the science of the massive task at hand. But he also had the skills needed to motivate the sprawling team of scientists that worked under him, to combine their disparate strengths to feed the larger project, and to somehow accommodate their foibles as well.
38%
“You can think of AI as a large math problem where it sees patterns that humans can’t see,” says Eric Schmidt, the former Google chief executive and chairman. “With a lot of science and biology, there are patterns that exist that humans can’t see, and when pointed out, they will allow us to develop better drugs, better solutions.”
40%
After Microsoft pulled out of the bid for Hinton’s start-up, he told Deng he could never have joined such a company. “It wasn’t the money. It was the review system,” he said. “It may be good for salespeople. But it’s not for researchers.”
42%
he called them “generative adversarial networks,” or GANs.
43%
a team of researchers at an Nvidia lab in Finland unveiled a new breed of GAN. Called “Progressive GANs,” these dueling neural networks could generate full-sized images of plants, horses, buses, and bicycles that seemed like the real thing.
43%
“Unfortunately, people these days are not very good at critical thinking. And people tend to have a very tribalistic idea of who’s credible and not credible.”
43%
When Goodfellow first arrived at Google, he began to explore a separate technique called “adversarial attacks,” showing that a neural network could be fooled into seeing or hearing things that weren’t really there. Just by changing a few pixels in a photo of an elephant—a change imperceptible to the human eye—he could fool a neural network into thinking this elephant was a car.
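The mechanics of such an attack can be sketched on a toy model: nudge every input feature a tiny, bounded amount in the direction that most changes the model’s score, following the sign of the gradient (as in the fast gradient sign method). The linear “classifier,” its weights, the labels, and the perturbation budget below are all invented for illustration:

```python
import numpy as np

# Toy adversarial attack: a tiny per-feature perturbation, chosen
# by the sign of the score's gradient, flips the model's decision.
w = np.array([1.0, -2.0, 3.0, -4.0])   # made-up linear model

def score(x):
    # positive score -> class "elephant", negative -> class "car"
    return float(w @ x)

x = np.array([0.3, -0.1, 0.2, -0.15])  # classified as "elephant"
eps = 0.25                             # per-feature change budget

# for a linear model the gradient of the score w.r.t. x is just w;
# step against it to push the prediction toward the other class
x_adv = x - eps * np.sign(w)
# every feature moved by at most eps, yet the label flips
```

In a deep network the gradient is obtained by backpropagation instead of read off directly, and the per-pixel budget is small enough that the altered image looks unchanged to a person.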
46%
As two university professors who were working on the plan told the New York Times, AlphaGo versus Lee Sedol was China’s Sputnik moment.
46%
China already had its own search engine, its own cloud computing services, its own AI labs, even its own TensorFlow. Called PaddlePaddle, it was built by Baidu.
46%
“Platforms establish a base by which innovation occurs in the future.”
46%
China’s other advantage, he said, was data. In each socioeconomic era, he liked to say, there was one primary means of production. In the agricultural era, it was about the land. “It doesn’t matter how many people you have. It doesn’t matter how brilliant you are. You cannot produce more if you do not have more land.” In the industrial era, it was about labor and equipment. In the new era, it was about data. “Without data, you cannot build a speech recognizer. It does not matter how many people you have. You may have one million brilliant engineers, but you won’t be able to build a system that …
47%
He took a screenshot and posted it to Twitter, which he thought of as “the world’s largest cafeteria room,” a place where anyone could show up and get anyone’s attention about anything.
47%
As Raji soon realized, the system was learning to identify black people as pornographic. “The data we use to train these systems matters,” she says. “We can’t just blindly pick our sources.”
48%
AI needs to be seen as a system. And the people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.
48%
“The major issue that I was seeing was that the standards, the measures by which we decided what progress looked like, could be misleading, and they could be misleading because of a severe lack of representation of who I call the under-sampled majority.”
48%
Before it was all the way on, the system recognized her face—or, at least, it recognized the mask. “Black Skin, White Masks,” she says, nodding to the 1952 critique of historical racism from the psychiatrist Frantz Fanon. “The metaphor becomes the truth. You have to fit a norm and that norm is not you.”
54%
The nativists stand in opposition to the empiricists, who believe that human knowledge comes mostly from learning.
57%
The irony was that, as Musk complained later that year, the robotic machines that helped manufacture the electric cars inside his Tesla factories weren’t as nimble as they seemed. “Excessive automation at Tesla was a mistake,” he said. “Humans are underrated.”
57%
He was also optimistic about the final outcome for humans. “If this happens fifty years from now,” he said, “there is plenty of time for the educational system to catch up.”
59%
this attitude was encapsulated by an oft-repeated quote from Machiavelli: “Make mistakes of ambition and not mistakes of sloth.”
60%
Reinforcement learning was ideally suited to games. Video games tallied points. But in the real world, no one was keeping score. Researchers had to define success in other ways, and that was far from trivial.
61%
there would always be tension between the chase for near-term technology and the distant dream.
61%
“We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come,” he said. “The time we have now is valuable, and we need to make use of it.”