The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
44%
We can always hope that advances in feature engineering or algorithm design will lead to a more complete theory of computational inference in the future. But we should be profoundly skeptical.
44%
This is another way of saying what philosophers and scientists of every stripe have learned long ago: Induction is not enough.
45%
A saturated model is final, and won’t improve any more by adding more data. It might even get worse in some cases, although the reasons are too technical to be explained here.
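To make the saturation point concrete, here is a toy sketch (my construction, not the book's): when labels carry irreducible noise, a simple learner's test accuracy climbs with data and then flattens at the noise ceiling, after which more examples change nothing.

```python
import random

random.seed(1)

def noisy_sample(n):
    # The label equals a binary feature 80% of the time; the remaining
    # 20% is irreducible noise that no amount of data can remove.
    return [(x, x if random.random() < 0.8 else 1 - x)
            for x in (random.randint(0, 1) for _ in range(n))]

def accuracy_after(n_train):
    # "Train": memorize the majority label for each feature value.
    train = noisy_sample(n_train)
    model = {}
    for v in (0, 1):
        labels = [y for x, y in train if x == v]
        model[v] = max(set(labels), key=labels.count) if labels else v
    # "Test": score the memorized rule on fresh noisy data.
    test = noisy_sample(10_000)
    return sum(model[x] == y for x, y in test) / len(test)

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} examples -> accuracy {accuracy_after(n):.2f}")
# Accuracy tops out near 0.80, the noise ceiling: past saturation,
# a million examples buy nothing that a hundred did not.
```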
45%
“And the answer is, it’s still improving—but we are getting to the point where we get less benefit than we did in the past.”13
45%
But self-driving cars, once thought to be around the corner, are still in a heavy research phase, and no doubt part of the problem is the training data from labeled video feeds, which is not insufficient in volume but is inadequate to handle long tail problems with atypical driving scenarios that nonetheless must be factored in for safety.
46%
to get machines to play games like Go, most notably Monte Carlo Tree Search … random sampling from a tree of different game possibilities, which has nothing intrinsic to do with deep learning.
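To give a flavor of the technique, here is a minimal sketch of the sampling core behind Monte Carlo game search, applied to a toy subtraction game of my own choosing rather than Go; full MCTS adds a search tree and a selection rule such as UCT on top of exactly this kind of random playout, and none of it involves deep learning.

```python
import random

def legal_moves(stones):
    # Toy game: take 1-3 stones; whoever takes the last stone wins.
    return [n for n in (1, 2, 3) if n <= stones]

def rollout(stones, my_turn):
    # Finish the game with uniformly random moves; return 1 if "we" win.
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return 1 if my_turn else 0
        my_turn = not my_turn
    return 0

def monte_carlo_move(stones, samples=5000):
    # Score each legal move by the fraction of random playouts it goes
    # on to win: pure sampling of game possibilities, nothing learned.
    scores = {}
    for move in legal_moves(stones):
        remaining = stones - move
        if remaining == 0:
            scores[move] = 1.0  # taking the last stone wins outright
        else:
            wins = sum(rollout(remaining, my_turn=False)
                       for _ in range(samples))
            scores[move] = wins / samples
    return max(scores, key=scores.get)

# Usually picks 2 from 10 stones, leaving the opponent at 8, which is a
# losing position under optimal play.
print(monte_carlo_move(10))
```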
47%
that human knowledge wasn’t involved simply wasn’t factually accurate.”
47%
DeepMind team used human inferences—namely, abductive ones—to design the system to successfully accomplish its task. These inferences were supplied from outside the inductive framework.
50%
AI. If deduction is inadequate, and induction is inadequate, then we must have a theory of abduction. Since we don’t (yet), we can already conclude that we are not on a path to artificial general intelligence.
51%
knowledge representation and reasoning (KR&R), which
51%
because we can’t infer what we don’t know, and we can’t make use of knowledge that we have without a suitable inference capability.
54%
Daniel Kahneman. In his 2011 best seller, Thinking, Fast and Slow,
54%
Type 1 and Type 2. Type 1 thinking is fast and reflexive, while Type 2 thinking involves more time-consuming and deliberate computations.
54%
more mindful, cautious, and questioning thinkers. Type 1 has a way of crowding out Type 2, which often leads us into fallacies and biases.
54%
Perceived threats are quick inferences, to be sure, but they’re still inferences.
54%
inference (fast or slow) is noetic, or knowledge-based.
55%
deductive inference ignores considerations of relevance.
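A standard illustration of that point (mine, not Larson's): disjunction introduction is deductively valid no matter how irrelevant the added disjunct, as a brute-force truth-table check confirms.

```python
from itertools import product

def valid(premise, conclusion):
    # Classically valid: no truth assignment makes the premise true
    # while the conclusion is false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if premise(p, q))

# From "p" alone we may infer "p or q" for ANY q whatsoever:
# "snow is white, therefore snow is white or the moon is cheese."
print(valid(lambda p, q: p, lambda p, q: p or q))  # True
```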
55%
Second, inductive inference gives us provisional knowledge, because the future might not resemble the past.
55%
induction synthetic because it adds knowledge, but notoriously it can provide no guarantee of truth.
55%
Inductive systems are also brittle, lacking robustness, and do not acquire genuine understanding from data alone.
55%
Induction is not a path to general intelligence.
55%
Neither deduction nor induction illuminates this core mystery of human intelligence.
55%
The abductive inference that Peirce proposed long ago does, but we don’t know how to program it.
55%
We are thus not on a path to artificial general intelligence—at least, not yet—in spite of recent proclamations to the contrary. We are s... [highlight truncated due to passage length restrictions]
58%
with simple, one-sentence questions, the meanings of words—alligators, hurdles—and the meanings of the relations between things—smaller than—frustrate techniques relying on data and frequency without understanding.
59%
polysemous
59%
long-tail problem of statistical, or inductive, approaches that get worse on less likely examples or interpretations. This is yet another way of saying that likelihood is not the same as genuine understanding. It’s not even in the same conceptual space. Here again, the frequency assumption represents real limitations to getting to artificial general intelligence.
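The failure mode is easy to exhibit with a toy baseline (hypothetical counts, mine rather than the book's): a "model" that always predicts the most frequent interpretation looks accurate in aggregate while being wrong on every rare case.

```python
from collections import Counter

# Hypothetical word-sense counts: in this corpus, "bank" almost always
# means the financial institution, rarely the river edge.
train = ["finance"] * 95 + ["river"] * 5
most_frequent = Counter(train).most_common(1)[0][0]

test = ["finance"] * 90 + ["river"] * 10
overall = sum(label == most_frequent for label in test) / len(test)
rare = sum(label == most_frequent for label in test if label == "river") / 10

print(f"overall accuracy: {overall:.0%}")  # 90% -- looks competent
print(f"rare-sense accuracy: {rare:.0%}")  # 0% -- the tail always loses
```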
59%
“Good enough” results from modern AI are themselves a kind of trick, an illusion that masks the need for understanding when reading or conversing.
59%
The point is that accuracy is itself contextual, and on tests that expose the absence of any understanding, getting six out of ten answers correct (as with state-of-the-art systems) isn’t progress at all. It’s evidence of disguised idiocy.
59%
pragmatic phenomena in conversational dialogue.
61%
Anaphora, mentioned earlier, means “referring back,” as in The ship left the harbor in May. Roger was on it. Here, the pronoun it refers back to the ship.3 Anaphora (and its cousin, cataphora, or “referring forward”),
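To see why this is hard for machines, consider a deliberately naive resolver (a sketch of mine, not any real NLP system) that links "it" to the most recent preceding noun; on the ship example, recency picks the wrong referent, and frequency data offers no correction.

```python
# The candidate nouns are listed by hand purely for this demonstration.
NOUNS = {"ship", "harbor", "may", "roger"}

def resolve_it(text):
    # Naive heuristic: "it" refers to the closest noun before it.
    words = [w.strip(".,").lower() for w in text.split()]
    candidates = [w for w in words[: words.index("it")] if w in NOUNS]
    return candidates[-1] if candidates else None

print(resolve_it("The ship left the harbor in May. Roger was on it."))
# -> "roger": the nearest noun, though any reader knows "it" is the ship.
```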
68%
scientist who spent his life’s work exploring the mystery of human intelligence knew all too well that machines were, by design, poor and unsuited replacements. Swift’s fantasies held wisdom.
69%
They really are all possessed by Prometheus, by what innovators can dream and achieve.
69%
Why sacrifice our belief in human innovation, if we don’t have to?
69%
The ploy is, ironically, conservative; when smartphones are seen as evolving into superintelligence, then radical invention becomes unnecessary. We keep in place designs and ideas that benefit the status quo, all the while talking of unbridled “progress.”
70%
“Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.”
70%
(The “hive mind” didn’t even give us Wikipedia—most of the real writing is done by singular experts, with others performing more mundane editing tasks.)
70%
Scientists and other members of the intelligentsia eventually pointed out that science without theory doesn’t make sense, since theoretical “models” or frameworks precede big data analysis and give machine learning something specific to do, to analyze.
71%
The intelligentsia began extolling a coming AI that would blog and write news for us. Next, they would replace us. In retrospect, Lanier’s worry
71%
But computers don’t have insights. People do. And collaborative efforts are only effective when individuals are valued.
71%
Stupefyingly, we now disparage Einstein to make room for talking-up machinery.
73%
technology is downstream of theory, and information technology especially is filling in as a replacement for innovation,
73%
It is probably true—or at least it’s reasonable to assume—that inadequate neuroscience knowledge is one of the key reasons we don’t yet have better theories about the nature of our minds.
73%
This is, of course, a core conceit in mythology about AI itself: that insights, theories, and hypotheses unknown and even unknowable at smaller scales will somehow pop into existence once we collect and analyze enough data,
73%
we can’t wait for theory to come from discovery and experiment. We have to place our faith in the supremacy of computational over human intelligence—astoundingly, even in the face of an ongoing theoretical mystery about how to imbue computers with flexible intelligence in the first place.
73%
mistakenly believing that the data itself will provide answers as the complexity of the simulation grows—an original conceit of big data.
74%
Rather, it is a reminder that AI—here, big data—works only when we have prior theory.
75%
“society is about to experience an epidemic of false positives coming out of big-data projects.”
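The arithmetic behind that warning can be made concrete with a small simulation (my construction, not from the book): run enough significance tests on pure noise and a steady stream of "discoveries" appears by chance alone.

```python
import random

random.seed(0)
n_tests, n, r_crit = 1000, 50, 0.28  # r_crit: ~5% two-sided cutoff at n = 50
false_positives = 0
for _ in range(n_tests):
    # Two genuinely unrelated noise series...
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    # ...and their sample correlation.
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
    sd_x = (sum((a - mean_x) ** 2 for a in x) / n) ** 0.5
    sd_y = (sum((b - mean_y) ** 2 for b in y) / n) ** 0.5
    if abs(cov / (sd_x * sd_y)) > r_crit:
        false_positives += 1

# With a 5% threshold and 1000 tests, roughly 50 "findings" are expected
# even though every single correlation is spurious.
print(false_positives, "spurious findings out of", n_tests)
```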