The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
Kindle Notes & Highlights
The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time—that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations.
all evidence suggests that human and machine intelligence are radically different. The myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them.
This is a profound mistake: success on narrow applications gets us not one step closer to general intelligence. The inferences that systems require for general intelligence—to read a newspaper, or hold a basic conversation, or become a helpmeet like Rosie the Robot in The Jetsons—cannot be programmed, learned, or engineered with our current knowledge of AI. As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking low-hanging fruit. The jump to general “common sense” is …
Gödel proved that there must exist some statements in any formal (mathematical or computational) system that are True, with capital-T standing, yet not provable in the system itself using any of its rules. The True statement can be recognized by a human mind, but is (provably) not provable by the system it’s formulated in.
Gödel hit upon the rare but powerful property of self-reference. Mathematical versions of self-referring expressions, such as “This statement is not provable in this system,” can be constructed without breaking the rules of mathematical systems. But the so-called self-referring “Gödel statements” threaten mathematics with contradiction: if they are true, then, as they assert, they are unprovable. If they are false, then they are provable after all, and since the system proves only true statements, they must be true. Either assumption leads to trouble; the only consistent resolution is that such a statement is true but not provable within the system.
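The self-referential construction above can be stated compactly. As a sketch in the standard notation of provability logic (not the book's own notation), with $\mathrm{Prov}_T$ the provability predicate of a formal system $T$ and $\ulcorner G \urcorner$ the Gödel number of $G$:

```latex
% G "says of itself" that it is not provable in T:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)

% If T proved G, it would also prove Prov_T(G), hence \neg G as well,
% making T inconsistent. So if T is consistent, G is unprovable in T,
% which is exactly what G asserts: G is true but unprovable.
```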
Turing’s great genius, and his great error, was in thinking that human intelligence reduces to problem-solving.
every observation that shapes the complex ideas and judgments of intelligence begins with a guess,
Social intelligence is also conspicuously left out of Turing’s puzzle-solving view of intelligence.
First, intelligence is situational—there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole. Second, it is contextual—far from existing in a vacuum, any individual intelligence will always be both defined and limited by its environment. (And currently, the environment, not the brain, is acting as the bottleneck to intelligence.) Third, human intelligence is largely externalized, contained not in your brain but in your civilization.
the idea of programming intuition ignores a fundamental fact about our own smarts. Humans have social intelligence. We have emotional intelligence. We use our minds for more than solving problems and puzzles, however complex (or rather, especially when the problems are complex).
sometime in the 1940s, after his work at Bletchley but certainly by the time of his 1950 paper prefiguring AI, Turing had settled his thoughts on a simplified view of intelligence. This was an egregious error and, further, one that has been passed down through generations of AI scientists, right up to the present day.
The problem-solving view of intelligence helps explain the production of invariably narrow applications of AI throughout its history.
Treating intelligence as problem-solving thus gives us narrow applications.
There is an inverse correlation between a machine’s success in learning one thing and its success in learning another. Even seemingly similar tasks are inversely related in terms of performance.
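A toy sketch of this trade-off, a one-parameter caricature of what the machine learning literature calls catastrophic interference. The setup is invented for illustration: two tasks pull a single shared parameter in opposite directions, so learning the second undoes the first.

```python
# One shared parameter, two tasks that pull it in opposite directions:
# task A wants slope 2, task B wants slope -2.
def loss(w, target_slope):
    return (w - target_slope) ** 2

def train(w, target_slope, steps=100, lr=0.1):
    for _ in range(steps):
        w -= lr * 2 * (w - target_slope)   # gradient of (w - slope)^2
    return w

w = 0.0
w = train(w, 2.0)        # learn task A
loss_A_before = loss(w, 2.0)
w = train(w, -2.0)       # now learn task B with the same parameter
loss_A_after = loss(w, 2.0)
print(loss_A_before, loss_A_after)  # task-A error explodes after task-B training
```

Real networks have many parameters, but the same interference shows up whenever tasks compete for shared weights.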
9%
Flag icon
bias is actually necessary in machine learning—it’s part of learning itself.
A well-known theorem called the “no free lunch” theorem proves exactly what we anecdotally witness when designing and building learning systems. The theorem states that any bias-free learning system will perform no better than chance when applied to arbitrary problems.
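A minimal sketch of the theorem's flavor (an illustrative toy, not Wolpert and Macready's general proof; the learners and setup are invented): enumerate every possible boolean target function on four inputs, hold one input out, and measure how often a learner predicts the held-out label. Averaged over all targets, any learner, whatever its bias, scores exactly chance.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))  # the four possible 2-bit inputs

def learner_majority(train):
    """A biased learner: predicts the majority label seen in training."""
    ones = sum(label for _, label in train)
    return 1 if ones > len(train) / 2 else 0

def learner_constant_zero(train):
    """A differently biased learner: always predicts 0."""
    return 0

def average_offtraining_accuracy(learner):
    # Enumerate every possible target function on the four inputs.
    total, correct = 0, 0
    for labels in product([0, 1], repeat=4):
        target = dict(zip(inputs, labels))
        for held_out in inputs:
            train = [(x, target[x]) for x in inputs if x != held_out]
            total += 1
            correct += (learner(train) == target[held_out])
    return correct / total

print(average_offtraining_accuracy(learner_majority))       # 0.5
print(average_offtraining_accuracy(learner_constant_zero))  # 0.5
```

The reason is visible in the loop: for any fixed training labels, the held-out label is 0 in half the targets and 1 in the other half, and the learner's guess depends only on the training labels.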
tuning a system to learn what’s intended by imparting to it a desired bias generally means causing it to become narrow, in the sense that it won’t then generalize to other domains.
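For instance, a toy sketch (using numpy; the names and targets are invented for illustration): a learner whose built-in bias says "the world is linear" excels on the domain the bias was designed for and fails on a domain just outside it.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 50)

# A learner whose inductive bias is "fit a straight line": whatever data
# it sees, it can only ever produce a linear model.
def fit_linear(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return lambda t: slope * t + intercept

errors = {}
for name, y in [("linear target", 2 * x + 1),     # inside the bias
                ("quadratic target", x ** 2)]:    # outside the bias
    model = fit_linear(x, y)
    errors[name] = float(np.mean((model(x) - y) ** 2))
    print(name, errors[name])   # near zero for linear, large for quadratic
```

The bias is what makes learning the linear target possible at all, and the same bias is what makes the quadratic target unlearnable: narrowness is the price of the bias.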
People who assume that extensions of modern machine learning methods like deep learning will somehow “train up,” or learn to be intelligent like humans, do not understand the fundamental limitations that are already known.
Admitting the necessity of supplying a bias to learning systems is tantamount to Turing’s observing that insights about mathematics must be supplied by human minds from outside formal methods, since machine learning bias is determined, prior to learning, by human designers.
the problem-solving view of intelligence necessarily produces narrow applications, and is therefore inadequate for the broader goals of AI. We inherited this view of intelligence from Alan Turing.
AI began producing narrow problem-solving applications, and it is still doing so to this day.
General (non-narrow) intelligence of the sort we all display daily is not an algorithm running in our heads, but calls on the entire cultural, historical, and social context within which we think and act in the world.
learning itself is a kind of problem-solving, made possible only by introducing a bias into the learner that simultaneously makes possible the learning of a particular application, while reducing performance on other applications.
Learning systems are actually just narrow problem-solving systems, too. Given that there is no known theoretical bridge from such narrow systems to general intelligence of the sort displayed by humans, AI has fallen into a trap.
we have no evidence in the biological world of anything intelligent ever designing a more intelligent version of itself.
One major problem with assumptions about increases in intelligence in AI circles is the problem of circularity: it takes (seemingly general) intelligence to increase general intelligence.
supercomputers without real intelligence just get to wrong answers more quickly.
the powers of the human mind outstrip our ability to mechanize it in the sense necessary for “scaling up,”
Theories about mind power evolving out of technology aren’t testable.
Good’s proposal, in other words, is based once again on an inadequate and simplified view of intelligence. It presupposes the original intelligence error, and adds to it yet another reductive sleight of hand: that an individual mechanical intelligence can design and construct a greater one.
Kitsch has its roots, typically, in a larger system of thought. For the communists, it was Marxism. With the inevitability myth, it’s technoscience. We inherited the technoscientific worldview most directly from the work of Auguste Comte.
Homo faber, in Greek terms, is a person who believes that techne—knowledge of craft or making things, the root of technology—defines who we are. The faberian understanding of human nature fits perfectly not only with Comte’s nineteenth-century idea of a utopian technoscience but with the twentieth-century obsession with building more and more powerful technologies, culminating in the grand project of, in effect, building ourselves—artificial intelligence.
Technoscience began with the Scientific Revolution, and within a few hundred years much of modern scientific theory was in place.
equating a mind with a computer is not scientific, it’s philosophical.
theorists—experts with big visions of the future based on a particular theory they endorse—tend to make worse predictions than pragmatic people, who see the world as complicated and lacking a clear fit with any single theory.
Predictions about scientific discoveries are perhaps best understood as indulgences of mythology; indeed, only in the realm of the mythical can certainty about the arrival of artificial general intelligence abide, untrammeled by Popper’s or MacIntyre’s or anyone else’s doubts.
Mythology about AI is not all bad. It keeps alive archetypal longings for creating life and intelligence, and can open windows into understanding ourselves. But when myth masquerades as science and certainty, it confuses the public, and frustrates non-mythological researchers who know that major theoretical obstacles remain unsolved.
The idea that the coming superintelligence will somehow be laser-focused and uber-competent at achieving an objective yet have zero common sense seems to cut against the grain of superintelligence itself—which is, after all, supposed to be human intelligence plus more.
Given that our own thinking is a puzzling series of guesses, how can we hope to program it?
Data-driven methods generally suffer from what we might call an empirical constraint.
The origin of intelligence, then, is conjectural or abductive, and of paramount importance is having a powerful conceptual framework within which to view facts or data.
AI lacks a fundamental theory—a theory of abductive inference.
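One way to see what is missing: abduction is often glossed as "inference to the best explanation." A toy Bayesian-style scoring sketch (every hypothesis, prior, and likelihood here is invented) can mimic that gloss, but only over hypotheses someone has already enumerated, which is precisely why such a sketch is not the missing theory.

```python
# Abduction as "inference to the best explanation": score each candidate
# hypothesis by prior plausibility times how well it explains the
# observation, and pick the best-scoring one.
def abduce(observation, hypotheses):
    """hypotheses: {name: (prior, {observation: likelihood})}"""
    def score(item):
        name, (prior, likelihoods) = item
        return prior * likelihoods.get(observation, 0.0)
    return max(hypotheses.items(), key=score)[0]

hypotheses = {
    "it rained":          (0.30, {"wet grass": 0.90}),   # 0.270
    "sprinkler ran":      (0.20, {"wet grass": 0.80}),   # 0.160
    "a water main burst": (0.01, {"wet grass": 0.99}),   # 0.0099
}

print(abduce("wet grass", hypotheses))  # "it rained"
```

The hard, unsolved part is everything outside this function: generating plausible hypotheses in the first place, in an open world, without a hand-built list.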
What people mean is almost never a literal function of what they word-for-word say. This feature of ordinary talk, studied in linguistics as pragmatics, is what makes language interpretation hard for AI, but meaningful and interesting—and generally easy and natural—for people.
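A tiny sketch of the gap (the function and its categories are invented for illustration): a purely literal interpreter computes meaning from the words alone, which is exactly what pragmatics says is not enough.

```python
# A literal interpreter treats meaning as a function of the words alone.
# Pragmatics, speaker intent, context, convention, is everything this
# word-for-word mapping misses.
def literal_interpretation(utterance):
    if utterance.startswith("Can you"):
        return "a yes/no question about ability"
    return "a statement"

print(literal_interpretation("Can you pass the salt?"))
# "a yes/no question about ability" -- but the speaker meant a request
```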
computers don’t have insights. People do. And collaborative efforts are only effective when individuals are valued. Someone has to have an idea.
Like induction compared to abduction, technology is downstream of theory,
a core conceit in mythology about AI itself: that insights, theories, and hypotheses unknown and even unknowable at smaller scales will somehow pop into existence once we collect and analyze enough data, using machine learning and other inductive approaches.
without existing theories, Big Data AI falls victim to overfitting, saturation, and blindness from data-inductive methods generally.
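A toy sketch of the overfitting half of this claim (using numpy; the data and degrees are invented): a flexible, "theory-free" degree-9 polynomial memorizes ten noisy samples exactly, while a constrained degree-3 fit tracks the underlying law and generalizes far better.

```python
import numpy as np

# Underlying law plus deterministic, alternating "noise".
x_train = np.linspace(0.0, 1.0, 10)
noise = 0.2 * (-1.0) ** np.arange(10)
y_train = np.sin(2 * np.pi * x_train) + noise

x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2 * np.pi * x_test)           # the clean underlying law

errors = {}
for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    errors[degree] = (train_mse, test_mse)
    print(f"degree {degree}: train mse {train_mse:.4f}, test mse {test_mse:.4f}")
```

The degree-9 model, with ten coefficients for ten points, interpolates the data (near-zero training error) yet swings wildly between samples; "more data plus more flexibility" here substitutes memorization for the theory sin(2πx).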
Wiener pointed out what we all know, or should know, which is that ideas emerge from cultures that value individual intellects: “New ideas are conceived in the intellects of individual scientists, and they are particularly likely to originate where there are many well-trained intellects, and above all where intellect is valued.”
mythology about the coming of superintelligent machines replacing humans makes concern over anti-intellectual and anti-human bias irrelevant. The very point of the myth is that anti-humanism is the future; it’s baked into the march of existing technology.
there is no way for current AI to “evolve” general intelligence in the first place, absent a fundamental discovery. Simply saying “we’re getting there” is scientifically and conceptually bankrupt, and further fans the flames of antihuman and anti-intellectual forces interested in (seemingly) controlling and predicting outcomes for, among other reasons, maximizing short-term profit by skewing discussion toward inevitability.