Kindle Notes & Highlights
Read between March 13, 2024 and July 30, 2025
We can always hope that advances in feature engineering or algorithm design will lead to a more complete theory of computational inference in the future. But we should be profoundly skeptical.
This is another way of saying what philosophers and scientists of every stripe learned long ago: Induction is not enough.
A saturated model is final, and won’t improve any more by adding more data. It might even get worse in some cases, although the reasons are too technical to be explained here.
“And the answer is, it’s still improving—but we are getting to the point where we get less benefit than we did in the past.”13
But self-driving cars, once thought to be around the corner, are still in a heavy research phase, and no doubt part of the problem is the training data from labeled video feeds, which is not insufficient in volume but is inadequate to handle long-tail problems with atypical driving scenarios that nonetheless must be factored in for safety.
to get machines to play games like Go, most notably Monte Carlo Tree Search … random sampling from a tree of different game possibilities, which has nothing intrinsic to do with deep learning.
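(Reader's note, not a highlight: the passage above describes Monte Carlo Tree Search only in outline, so the sketch below shows the idea in minimal Python. The toy Nim game, the names NimState, Node, and mcts, and the iteration count are my own illustrative choices, not anything from the book; the point is simply that the method works by repeated random sampling over a tree of game possibilities, with no learned model anywhere.)

```python
# Minimal Monte Carlo Tree Search sketch (illustrative, not the book's code).
# The game is a toy Nim variant so the example runs end to end.
import math
import random


class NimState:
    """Toy game: players alternately take 1-3 stones; taking the last stone wins."""
    def __init__(self, stones=15, player=1):
        self.stones = stones
        self.player = player                      # player about to move

    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))

    def play(self, move):
        return NimState(self.stones - move, -self.player)

    def winner(self):
        # When no stones remain, the player who just moved took the last one and won.
        return -self.player if self.stones == 0 else None


class Node:
    """One tree node: a game state plus visit statistics."""
    def __init__(self, state, move=None, parent=None):
        self.state, self.move, self.parent = state, move, parent
        self.children, self.untried = [], state.legal_moves()
        self.visits, self.wins = 0, 0.0

    def ucb1(self, c=1.4):
        # Upper confidence bound: balances exploiting good moves and exploring rare ones.
        return self.wins / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)


def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend along the statistically most promising children.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.ucb1())
        # 2. Expansion: add one unexplored move as a new child.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.play(move), move, node))
            node = node.children[-1]
        # 3. Simulation: play random moves until the game ends.
        state = node.state
        while state.winner() is None:
            state = state.play(random.choice(state.legal_moves()))
        # 4. Backpropagation: credit every node on the path with the outcome.
        winner = state.winner()
        while node is not None:
            node.visits += 1
            # The move into this node was made by the opponent of the player to move here.
            node.wins += 1.0 if winner == -node.state.player else 0.0
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move


if __name__ == "__main__":
    print("MCTS suggests taking", mcts(NimState(15)), "stones")
```

Everything in the loop is bookkeeping over random playouts; nothing in it involves deep learning, which is the author's point.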
that human knowledge wasn’t involved simply wasn’t factually accurate.”
DeepMind team used human inferences—namely, abductive ones—to design the system to successfully accomplish its task. These inferences were supplied from outside the inductive framework.
AI. If deduction is inadequate, and induction is inadequate, then we must have a theory of abduction. Since we don’t (yet), we can already conclude that we are not on a path to artificial general intelligence.
knowledge representation and reasoning (KR&R), which
because we can’t infer what we don’t know, and we can’t make use of knowledge that we have without a suitable inference capability.
Daniel Kahneman. In his 2011 best seller, Thinking, Fast and Slow,
Type 1 and Type 2. Type 1 thinking is fast and reflexive, while Type 2 thinking involves more time-consuming and deliberate computations.
more mindful, cautious, and questioning thinkers. Type 1 has a way of crowding out Type 2, which often leads us into fallacies and biases.
Perceived threats are quick inferences, to be sure, but they’re still inferences.
inference (fast or slow) is noetic, or knowledge-based.
deductive inference ignores considerations of relevance.
Second, inductive inference gives us provisional knowledge, because the future might not resemble the past.
induction synthetic because it adds knowledge, but notoriously it can provide no guarantee of truth.
Inductive systems are also brittle, lacking robustness, and do not acquire genuine understanding from data alone.
Induction is not a path to general intelligence.
Neither deduction nor induction illuminates this core mystery of human intelligence.
The abductive inference that Peirce proposed long ago does, but we don’t know how to program it.
We are thus not on a path to artificial general intelligence—at least, not yet—in spite of recent proclamations to the contrary. We are s...
with simple, one-sentence questions, the meanings of words—alligators, hurdles—and the meanings of the relations between things—smaller than—frustrate techniques relying on data and frequency without understanding.
polysemous
long-tail problem of statistical, or inductive, approaches that get worse on less likely examples or interpretations. This is yet another way of saying that likelihood is not the same as genuine understanding. It’s not even in the same conceptual space. Here again, the frequency assumption represents real limitations to getting to artificial general intelligence.
“Good enough” results from modern AI are themselves a kind of trick, an illusion that masks the need for understanding when reading or conversing.
The point is that accuracy is itself contextual, and on tests that expose the absence of any understanding, getting six out of ten answers correct (as with state-of-the-art systems) isn’t progress at all. It’s evidence of disguised idiocy.
pragmatic phenomena in conversational dialogue.
Anaphora, mentioned earlier, means “referring back,” as in “The ship left the harbor in May. Roger was on it.” Here, the pronoun “it” refers back to the ship.3 Anaphora (and its cousin, cataphora, or “referring forward”),
scientist who spent his life’s work exploring the mystery of human intelligence knew all too well that machines were, by design, poor and unsuited replacements. Swift’s fantasies held wisdom.
They really are all possessed by Prometheus, by what innovators can dream and achieve.
Why sacrifice our belief in human innovation, if we don’t have to?
The ploy is, ironically, conservative; when smartphones are seen as evolving into superintelligence, then radical invention becomes unnecessary. We keep in place designs and ideas that benefit the status quo, all the while talking of unbridled “progress.”
“Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.”
(The “hive mind” didn’t even give us Wikipedia—most of the real writing is done by singular experts, with others performing more mundane editing tasks.)
Scientists and other members of the intelligentsia eventually pointed out that science without theory doesn’t make sense, since theoretical “models” or frameworks precede big data analysis and give machine learning something specific to do, to analyze.
The intelligentsia began extolling a coming AI that would blog and write news for us. Next, they would replace us. In retrospect, Lanier’s worry
But computers don’t have insights. People do. And collaborative efforts are only effective when individuals are valued.
Stupefyingly, we now disparage Einstein to make room for talking-up machinery.
technology is downstream of theory, and information technology especially is filling in as a replacement for innovation,
It is probably true—or at least it’s reasonable to assume—that inadequate neuroscience knowledge is one of the key reasons we don’t yet have better theories about the nature of our minds.
This is, of course, a core conceit in mythology about AI itself: that insights, theories, and hypotheses unknown and even unknowable at smaller scales will somehow pop into existence once we collect and analyze enough data,
we can’t wait for theory to come from discovery and experiment. We have to place our faith in the supremacy of computational over human intelligence—astoundingly, even in the face of an ongoing theoretical mystery about how to imbue computers with flexible intelligence in the first place.
mistakenly believing that the data itself will provide answers as the complexity of the simulation grows—an original conceit of big data.
Rather, it is a reminder that AI—here, big data—works only when we have prior theory.
“society is about to experience an epidemic of false positives coming out of big-data projects.”