The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI
As a lifelong academic on my way to testify before the House Committee on Science, Space, and Technology on the topic of artificial intelligence, I suppose nerves were to be expected.
It was June 26, 2018,
I’d recently cofounded AI4ALL, an educational nonprofit intended to foster greater inclusion in STEM (science, technology, engineering, and mathematics) by opening university labs to girls,
was past the midpoint of a twenty-one-month sabbatical from my professorship at Stanford and was serving as chief scientist of AI at Google Cloud—
Greg Brockman, cofounder and chief technology officer of a recently founded start-up called OpenAI.
MarkGrabe
Brockman is from Thompson, ND, and attended UND.
Chapter 2
Chapter 3
Yann LeCun would one day serve as Facebook’s chief scientist of AI,
America. Unassuming but ambitious, he’d been making waves in recent years by demonstrating the startling capabilities of an algorithm called a “neural network” to
Increased attention was being paid to algorithms that solved problems by discovering patterns from examples, rather than being explicitly programmed—in other words, learning what to do rather than being told. Researchers gave it a fitting name: “machine learning.”
Turing succinctly contrasted “rule-based AI,” in which a complete agent capable of intelligent behavior is built from scratch, and machine learning, in which such an agent is allowed to develop on its own, asking: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?”
In contrast to what lies beneath the hood of a car or inside a cell phone, the brain isn’t an assembly of cleanly distinguished components—at least not in a form any human designer would consider intuitive. Instead, one finds a web of nearly 100 billion neurons—tiny, finely focused units of electrochemical transmission—connecting with one another in vast networks.
or even thinking in the abstract. Moreover, the structure of these networks is almost entirely learned, or at least refined, long after the brain’s initial formation in utero.
extending the basic model of the neuron with the notion that certain inputs tend to exert greater influence over its behavior than others,
When those influences are allowed to change over time, growing stronger or weaker in response to success or failure in completing a task, a network of neurons can, in essence, learn.
Rosenblatt applied this principle to an array of four hundred light sensors arranged in the form of a 20-by-20-pixel camera. By wiring the output of each sensor into the perceptron, it could learn to identify visual patterns, such as shapes drawn on index cards held before it.
Because the initial influence of each sensor was randomly set, the system’s attempts at classifying what it saw started out random as well. In response, Rosenblatt, serving as the perceptron’s teacher, used a swit...
As the process was repeated, the perceptron incrementally arrived at a reliable ability to tell one shape from another.
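The training procedure described in these highlights — randomly set initial influences, each strengthened or weakened after a success or failure, repeated until classification becomes reliable — can be sketched in a few lines. This is a toy reconstruction under my own assumptions, not Rosenblatt's hardware: two inputs stand in for his 400 photo-sensors, and the task (learning logical AND) is chosen for illustration.

```python
import random

# Sketch of the perceptron learning rule the passage describes:
# influences (weights) start random, then strengthen or weaken
# in response to success or failure on each example.

def train_perceptron(examples, epochs=20, lr=0.1):
    random.seed(0)  # "randomly set" initial influences, reproducibly
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - out        # 0 on success, +/-1 on failure
            w[0] += lr * err * x0     # strengthen or weaken each influence
            w[1] += lr * err * x1
            b += lr * err
    return w, b

# Toy task: two binary inputs standing in for the 400 sensors
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x0 + w[1] * x1 + b > 0 else 0 for (x0, x1), _ in data]
print(preds)  # [0, 0, 0, 1] -- the pattern is reliably recognized
```

Because the initial answers are random, early predictions are wrong; repeated corrections incrementally drive the weights to a reliable classifier, just as the highlight describes.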
In 1959, neurophysiologists David Hubel and Torsten Wiesel conducted an experiment at Harvard that yielded a seminal glimpse into the mammalian brain—specifically, the visual cortex of a cat.
Hubel and Wiesel’s epiphany was that perception doesn’t occur in a single layer of neurons, but across many, organized in a hierarchy that begins with the recognition of superficial details and ends with complex, high-level awareness.
Hubel and Wiesel’s work transformed the way we think about sensory perception, earning the duo a Nobel Prize in 1981.
David E. Rumelhart published a letter in the scientific journal Nature presenting a technique that made it possible for algorithms like the neocognitron to effectively learn. They called it “backpropagation,” named for its defining feature: a cascading effect in which each instance of training—specifically, the degree to which a network’s response to a given stimulus is correct or incorrect—ripples from one end to the other, layer by layer.
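The cascading correction described here — error at the output rippling backward from one end of the network to the other, layer by layer — can be sketched minimally. This is my own illustration, not the setup of the 1986 Nature letter: a single-hidden-layer network trained on XOR, with the task, seed, learning rate, and layer size all chosen for the example.

```python
import math
import random

# Minimal backpropagation sketch: the output error "ripples" backward,
# adjusting the output layer's weights and then the hidden layer's.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, b1, w2, b2):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
         for j in range(len(w1))]
    y = sigmoid(sum(wj * hj for wj, hj in zip(w2, h)) + b2)
    return h, y

random.seed(1)
H = 4  # hidden units (an assumption for this toy example)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]

def mean_squared_error():
    return sum((forward(x, w1, b1, w2, b2)[1] - t) ** 2
               for x, t in data) / len(data)

initial_error = mean_squared_error()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x, w1, b1, w2, b2)
        dy = (y - t) * y * (1 - y)  # how incorrect the response was
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # error rippled back a layer
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_error = mean_squared_error()
print(initial_error, final_error)  # error shrinks as corrections ripple back
```

Each training instance measures how correct the network's response was, then propagates that signal backward through the layers — the defining feature the highlight names.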
Although Rumelhart was the lead researcher, it was Geoff Hinton, one of his two coauthors, who would become the figure most associated with backpropagation.
Chapter 4
Chapter 5