Kindle Notes & Highlights
by Fei-Fei Li
Read between November 9 and November 30, 2023
As a lifelong academic on my way to testify before the House Committee on Science, Space, and Technology on the topic of artificial intelligence, I suppose nerves were to be expected.
It was June 26, 2018,
I’d recently cofounded AI4ALL, an educational nonprofit intended to foster greater inclusion in STEM (science, technology, engineering, and mathematics) by opening university labs to girls,
was past the midpoint of a twenty-one-month sabbatical from my professorship at Stanford and was serving as chief scientist of AI at Google Cloud—
Yann LeCun would one day serve as Facebook’s chief scientist of AI,
America. Unassuming but ambitious, he’d been making waves in recent years by demonstrating the startling capabilities of an algorithm called a “neural network” to
Increased attention was being paid to algorithms that solved problems by discovering patterns from examples, rather than being explicitly programmed—in other words, learning what to do rather than being told. Researchers gave it a fitting name: “machine learning.”
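The contrast this highlight draws can be made concrete with a small sketch (my own illustration in Python, not from the book): the first function is told what to do by its programmer, while the second works out its rule from labeled examples.

    # A toy illustration of "learning what to do rather than being told."
    # Task: decide whether a pen stroke counts as "long."

    # Explicitly programmed: a person chooses the rule in advance.
    def is_long_by_rule(length_cm):
        return length_cm > 5.0            # threshold picked by the programmer

    # Machine learning: infer the rule from labeled examples.
    # (The measurements below are invented purely for illustration.)
    examples = [(1.2, "short"), (2.8, "short"), (6.1, "long"), (7.4, "long")]

    def learn_threshold(pairs):
        shorts = [x for x, label in pairs if label == "short"]
        longs = [x for x, label in pairs if label == "long"]
        # Put the boundary halfway between the two groups seen so far.
        return (max(shorts) + min(longs)) / 2

    threshold = learn_threshold(examples)   # about 4.45, discovered from the data

    def is_long_learned(length_cm):
        return length_cm > threshold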
Turing succinctly contrasted “rule-based AI,” in which a complete agent capable of intelligent behavior is built from scratch, and machine learning, in which such an agent is allowed to develop on its own, asking: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?”
In contrast to what lies beneath the hood of a car or inside a cell phone, the brain isn’t an assembly of cleanly distinguished components—at least not in a form any human designer would consider intuitive. Instead, one finds a web of nearly 100 billion neurons—tiny, finely focused units of electrochemical transmission—connecting with one another in vast networks.
or even thinking in the abstract. Moreover, the structure of these networks is almost entirely learned, or at least refined, long after the brain’s initial formation in utero.
extending the basic model of the neuron with the notion that certain inputs tend to exert greater influence over its behavior than others,
When those influences are allowed to change over time, growing stronger or weaker in response to success or failure in completing a task, a network of neurons can, in essence, learn.
Rosenblatt applied this principle to an array of four hundred light sensors arranged in the form of a 20-by-20-pixel camera. By wiring the output of each sensor into the perceptron, it could learn to identify visual patterns, such as shapes drawn on index cards held before it.
Because the initial influence of each sensor was randomly set, the system’s attempts at classifying what it saw started out random as well. In response, Rosenblatt, serving as the perceptron’s teacher, used a swit...
As the process was repeated, the perceptron incrementally arrived at a reliable ability to tell one shape from another.
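A minimal sketch of that procedure in Python with NumPy, using synthetic shapes of my own invention in place of Rosenblatt's index cards and hardware: all 400 sensor readings feed one weighted sum, and each weight is nudged up or down whenever the guess is wrong.

    import numpy as np

    # A sketch of Rosenblatt-style perceptron learning on 20-by-20 "images."
    rng = np.random.default_rng(0)

    def make_square():                 # a filled square, standing in for one card
        img = np.zeros((20, 20))
        img[5:15, 5:15] = 1.0
        return img

    def make_bar():                    # a vertical bar, standing in for the other
        img = np.zeros((20, 20))
        img[:, 9:11] = 1.0
        return img

    # The influence of each of the 400 sensors starts out random,
    # so the first classifications are random too.
    weights = rng.normal(scale=0.01, size=400)
    bias = 0.0
    lr = 0.1

    def predict(image):
        # One weighted sum over all 400 sensor readings, thresholded at zero.
        return 1 if image.reshape(-1) @ weights + bias > 0 else 0

    # Label 1 = square, 0 = bar; the "teacher" signal corrects each mistake.
    training = [(make_square(), 1), (make_bar(), 0)]

    for epoch in range(20):
        for image, target in training:
            error = target - predict(image)            # -1, 0, or +1
            weights += lr * error * image.reshape(-1)  # strengthen or weaken each influence
            bias += lr * error

    print(predict(make_square()), predict(make_bar()))   # expected: 1 0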
In 1959, neurophysiologists David Hubel and Torsten Wiesel conducted an experiment at Harvard that yielded a seminal glimpse into the mammalian brain—specifically, the visual cortex of a cat.
Hubel and Wiesel’s epiphany was that perception doesn’t occur in a single layer of neurons, but across many, organized in a hierarchy that begins with the recognition of superficial details and ends with complex, high-level awareness.
Hubel and Wiesel’s work transformed the way we think about sensory perception, earning the duo a Nobel Prize in 1981.
David E. Rumelhart published a letter in the scientific journal Nature presenting a technique that made it possible for algorithms like the neocognitron to effectively learn. They called it “backpropagation,” named for its defining feature: a cascading effect in which each instance of training—specifically, the degree to which a network’s response to a given stimulus is correct or incorrect—ripples from one end to the other, layer by layer.
Although Rumelhart was the lead researcher, it was Geoff Hinton, one of his two coauthors, who would become the figure most associated with backpropagation.
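A compressed sketch of that cascading correction in Python with NumPy, on a tiny two-layer network of my own construction (trained on the XOR task, not the setup from the Nature letter itself): the error measured at the output is passed backward through the layers, and each layer's weights shift in proportion to their share of the blame.

    import numpy as np

    # A tiny two-layer network trained by backpropagation on the XOR problem.
    rng = np.random.default_rng(1)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(10000):
        # Forward pass: the stimulus flows from input to output.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Backward pass: the error at the output ripples back, layer by layer.
        output_delta = (output - y) * output * (1 - output)
        hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

        # Each layer's weights shift in proportion to its share of the blame.
        W2 -= lr * hidden.T @ output_delta
        b2 -= lr * output_delta.sum(axis=0)
        W1 -= lr * X.T @ hidden_delta
        b1 -= lr * hidden_delta.sum(axis=0)

    print(output.ravel().round(2))   # typically approaches [0, 1, 1, 0]

The sigmoid-derivative factors such as output * (1 - output) are what scale each correction to how responsive that layer was, which is the layer-by-layer rippling the highlight describes.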

