Artificial Intelligence: A Guide for Thinking Humans
Read between October 24 and November 2, 2025
5%
The issue that worries him is really one of complexity. He fears that AI might show us that the human qualities we most value are disappointingly simple to mechanize.
5%
“If such minds of infinite subtlety and complexity and emotional depth could be trivialized by a small chip, it would destroy my sense of what humanity is about.”
6%
While assuming that these AI researchers underestimated humans, had I in turn underestimated the power and promise of current-day AI?
6%
try to sort out how far artificial intelligence has come, as well as elucidate its disparate—and sometimes conflicting—goals.
7%
The proposed study was, they wrote, based on “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
7%
Artificial intelligence inherits this packing problem, sporting different meanings in different contexts.
7%
Indeed, the word intelligence is an over-packed suitcase, zipper on the verge of breaking.
7%
Instead, it has focused on two efforts: one scientific and one practical. On the scientific side, AI researchers are investigating the mechanisms of “natural” (that is, biological) intelligence by trying to embed it in computers.
7%
On the practical side, AI proponents simply want to create computer programs that perform tasks as well as or better than humans, without worrying about whether these programs are actually thinking in the way humans think. When asked if their motivations are practical or scientific, many AI people joke …
8%
But since the 2010s, one family of AI methods—collectively called deep learning (or deep neural networks)—has risen above the anarchy to become the dominant AI paradigm.
8%
AI is a field that includes a broad set of approaches, with the goal of creating machines with intelligence. Deep learning is only one such approach.
8%
To better understand these various distinctions, it’s important to understand a philosophical split that occurred early in the AI research community: the split between so-called symbolic and subsymbolic AI.
8%
A symbolic AI program’s knowledge consists of words or phrases (the “symbols”), typically understandable to a human, along with rules by which the program can combine and process these symbols in order to perform its assigned task.
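To make the idea concrete, here is a minimal sketch of what such symbol-and-rule knowledge can look like in code. The symptoms, rules, and conclusions are invented for illustration, in the spirit of the expert systems mentioned below; they are not taken from the book.

```python
# A toy symbolic system: human-readable symbols plus rules for combining
# them. Both the facts and the rules here are invented for illustration.
facts = {"has_fever", "has_rash"}

rules = [
    ({"has_fever", "has_rash"}, "possible_measles"),
    ({"possible_measles"}, "recommend_doctor_visit"),
]

# Forward chaining: keep applying rules until no new symbol can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the derived symbols, e.g., "possible_measles"
```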
8%
Symbolic AI of the kind illustrated by GPS ended up dominating the field for its first three decades, most notably in the form of expert systems, in which human experts devised rules for computer programs to use in tasks such as medical diagnosis and legal decision-making.
8%
In contrast, subsymbolic approaches to AI took inspiration from neuroscience and sought to capture the sometimes-unconscious thought processes underlying what some have called fast perception, such as recognizing faces or identifying spoken words.
9%
Roughly speaking, a neuron sums up all the inputs it receives from other neurons, and if the total sum reaches a certain threshold level, the neuron fires. Importantly, different connections (synapses) from other neurons to a given neuron have different strengths; in calculating the sum of its inputs, the given neuron gives more weight to inputs from stronger connections than inputs from weaker connections.
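A minimal sketch of that summing-and-threshold behavior, with illustrative weights and a threshold of my own choosing:

```python
# Each input is scaled by its connection strength (weight); the neuron
# "fires" only if the weighted sum reaches the threshold.
def neuron_fires(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The strong connection (weight 0.9) dominates the weak one (weight 0.1).
print(neuron_fires([1, 1], weights=[0.9, 0.1], threshold=0.5))  # 1 (fires)
print(neuron_fires([0, 1], weights=[0.9, 0.1], threshold=0.5))  # 0 (stays quiet)
```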
9%
Neuroscientists believe that adjustments to the strength of connections between neurons are a key part of how learning takes place in the brain.
9%
Rosenblatt’s idea was that the perceptron should similarly be trained on examples: it should be rewarded when it fires correctly and punished when it errs. This form of conditioning is now known in AI as supervised learning. During training, the learning system is given an example, it produces an output, and it is then given a “supervision signal,” which tells how much the system’s output differs from the correct output. The system then uses this signal to adjust its weights and threshold.
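Here is a sketch of one supervised-learning step in that spirit. The learning rate and the exact update rule are standard perceptron-training conventions, assumed here rather than quoted from the book:

```python
# One training step for a perceptron: produce an output, compare it with
# the correct label (the "supervision signal"), then nudge the weights and
# threshold in the direction that reduces the error.
def train_step(inputs, label, weights, threshold, lr=0.1):
    fired = 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0
    error = label - fired                      # 0 if correct, +1 or -1 if not
    weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    threshold -= lr * error                    # a lower threshold makes firing easier
    return weights, threshold

weights, threshold = train_step([1, 0], label=1, weights=[0.2, 0.4], threshold=0.5)
print(weights, threshold)  # weights on active inputs grow; the threshold drops
```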
9%
Perhaps the most important term in computer science is algorithm, which refers to a “recipe” of steps a computer can take in order to solve a particular problem.
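A tiny illustrative “recipe” in that sense (my example, not the book’s): a step-by-step procedure for finding the largest number in a list.

```python
def find_max(numbers):
    largest = numbers[0]       # Step 1: start with the first number.
    for n in numbers[1:]:      # Step 2: examine each remaining number.
        if n > largest:        # Step 3: if it beats the current best,
            largest = n        #         remember it instead.
    return largest             # Step 4: report the result.

print(find_max([3, 7, 2]))  # 7
```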
12%
The network shown in figure 4 is referred to as “multilayered” because it has two layers of units (hidden and output).
12%
Networks that have more than one layer of hidden units are called deep networks.
12%
However, unlike in a perceptron, a unit here doesn’t simply “fire” or “not fire” (that is, produce 1 or 0) based on a threshold; instead, each unit uses its sum to compute a number between 0 and 1 that is called the unit’s “activation.”
12%
If the sum that a unit computes is low, the unit’s activation is close to 0; if the sum is high, the activation is close to 1.
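The classic squashing function with this behavior is the logistic (sigmoid) function; whether figure 4’s network uses exactly this function is my assumption, but it illustrates the low-sum/high-sum behavior described above:

```python
import math

# Squash an arbitrary weighted sum into an activation between 0 and 1.
def activation(weighted_sum):
    return 1.0 / (1.0 + math.exp(-weighted_sum))

print(activation(-4.0))  # low sum  -> about 0.02 (close to 0)
print(activation(4.0))   # high sum -> about 0.98 (close to 1)
```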
12%
Back-propagation is a method for training these networks. As its name implies, it is a way to take an error observed at the output units (for example, a high confidence for the wrong digit in the example of figure 4) and to “propagate” the blame for that error backward (in figure 4, this would be from right to left) so as to assign proper blame to each of the weights in the network. This allows back-propagation to determine how much to change each weight in order to reduce the error.
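A compact sketch of the idea on a toy network (2 inputs, 2 hidden units, 1 output). The network size, sigmoid activations, learning rate, and data are all illustrative assumptions, but the backward “blame assignment” mirrors the description above:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

def train_step(x, target, lr=0.5):
    # Forward pass: compute hidden activations, then the output.
    h = [sigmoid(sum(w * xi for w, xi in zip(w_hidden[j], x))) for j in range(2)]
    y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)))

    # Backward pass: blame at the output, propagated back to each hidden unit.
    delta_out = (y - target) * y * (1 - y)
    delta_hidden = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

    # Each weight changes in proportion to the blame assigned to it.
    for j in range(2):
        w_out[j] -= lr * delta_out * h[j]
        for i in range(2):
            w_hidden[j][i] -= lr * delta_hidden[j] * x[i]
    return (y - target) ** 2

for _ in range(5):
    print(train_step([1.0, 0.0], target=1.0))  # squared error shrinks over the steps
```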
13%
Symbolic systems can be engineered by humans, be imbued with human knowledge, and use human-understandable reasoning to solve problems.
13%
Subsymbolic systems tend to be hard to interpret, and no one knows how to directly program complex human knowledge or logic into these systems. Subsymbolic systems seem much better suited to perceptual or motor tasks for which humans can’t easily define rules.
15%
We’re back to the philosophical question I was discussing with my mother: Is there a difference between “simulating a mind” and “literally having a mind”?
20%
The deep in deep learning doesn’t refer to the sophistication of what is learned; it refers only to the depth in layers of the network being trained.
21%
David Hubel and Torsten Wiesel were later awarded a Nobel Prize for their discoveries of hierarchical organization in the visual systems of cats and primates (including humans) and for their explanation of how the visual system transforms light striking the retina into information about what is in the scene.
21%
Convolutional neural networks, or (as most people in the field call them) ConvNets.
21%
The design of ConvNets is based on several key insights about the brain’s visual system that were discovered by Hubel and Wiesel in the 1950s and ’60s.
21%
The role of these backward connections is not well understood by neuroscientists, although it is well established that our prior knowledge and expectations, presumably stored in higher brain layers, strongly influence what we perceive.
28%
It is inaccurate to say that today’s successful ConvNets learn “on their own.” As we saw in the previous chapter, in order for a ConvNet to learn to perform a task, a huge amount of human effort is required to collect, curate, and label the data, as well as to design the many aspects of the ConvNet’s architecture.
31%
The fear is that if we don’t understand how AI systems work, we can’t really trust them or predict the circumstances under which they will make errors.
31%
One of the hottest new areas of AI is variously called “explainable AI,” “transparent AI,” or “interpretable machine learning.”
32%
Calling this an “intriguing property” of neural networks is a little like calling a hole in the hull of a fancy cruise liner a “thought-provoking facet” of the ship. Intriguing, yes, and more investigation is needed, but if the leak is not fixed, this ship is going down.
32%
If deep-learning systems, so successful at computer vision and other tasks, can easily be fooled by manipulations to which humans are not susceptible, how can we say that these networks “learn like humans”?
32%
And computer vision isn’t the only domain in which networks can be fooled; researchers have also designed attacks that fool deep neural networks that deal with language, including speech recognition and text analysis.
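The book does not spell out how such attacks are constructed, but a well-known recipe is the “fast gradient sign” idea: nudge every input a tiny amount in whichever direction most increases the model’s error. A sketch against a toy one-layer model, with all weights and inputs invented for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

weights = [2.0, -3.0, 1.0]  # a toy "trained" classifier (illustrative values)

def predict(x):
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)))

def adversarial(x, label, eps=0.1):
    # For this model, the loss gradient w.r.t. input i is proportional to
    # (prediction - label) * weights[i]; step in the sign of that gradient.
    err = predict(x) - label
    sign = lambda v: 1.0 if v > 0 else -1.0
    return [xi + eps * sign(err * w) for xi, w in zip(x, weights)]

x = [0.5, 0.1, 0.3]
print(predict(x))                  # confidence on the original input (~0.73)
print(predict(adversarial(x, 1)))  # lower confidence after a tiny nudge (~0.60)
```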
33%
Clever Hans has become a metaphor for any individual (or program!) that gives the appearance of understanding but is actually responding to unintentional cues given by a trainer.
34%
The AI researcher Andrew Ng has optimistically proclaimed, “AI is the new electricity.” Ng explains:
34%
“Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.”
34%
The Great AI Trade-Off.
35%
Loss of privacy is not the only danger.
35%
Face-recognition systems can make errors. If your face is matched in error, you might be barred from a store or an airplane flight or wrongly accused of a crime.
35%
These systems have been found to have a significantly higher error rate on people of color than on white people.
35%
To what extent should AI research and development be regulated, and who should do the regulating?
35%
Simply leaving regulation up to AI practitioners would be as unwise as leaving it solely up to government agencies.
36%
I believe that regulation of AI should be modeled on the regulation of other technologies, particularly those in the biological and medical sciences, such as genetic engineering. In those fields, regulation includes quality assurance and the analysis of risks.
36%
Could machines themselves have their own sense of morality, complete enough for us to allow them to make ethical decisions on their own, without humans having to oversee them?
36%
Asimov’s stories often focused on the unintended consequences of programming ethical rules into robots.