Kindle Notes & Highlights
Read between March 13, 2024 and July 30, 2025
And here we have it: Turing’s great genius, and his great error, was in thinking that human intelligence reduces to problem-solving.
Guessing is known in science as forming hypotheses
supercomputers without real intelligence just get to wrong answers more quickly.
“Once we get to human-levels of intelligence, the system can design a smarter-than-human version of itself,” so the hope goes. But, we already have “human-level” intelligence—we’re human. Can we do this? What are the intelligence explosion promoters really talking about?
themes, and thus conveniently absolves individual scientists from the responsibility of needing to make scientific breakthroughs or develop revolutionary ideas.
Imitation Game, the Turing test. Dartmouth scientists, too, thought
Here, it’s best to be clear: equating a mind with a computer is not scientific, it’s philosophical.
Admitting complexity—and complications—gets us further than does easy oversimplification.
The fact that conjectures lead to discoveries doesn’t fit with mechanical accounts of science; to the contrary, it contradicts them. But detective work, scientific discovery, innovation, and common sense are all workings of the mind; they are all inferences that AI scientists in search of generally intelligent machines must somehow account for. As you can
Soundness tells us that the premises really are “true.”
Knowledge gleaned from observations is always provisional.
(The law of large numbers tells us that, given a large enough sample, the observed frequency will approach the actual probability: across a million coin flips, the split will come quite close to fifty-fifty.)
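The coin-flip claim is easy to check numerically. A minimal simulation, under my own assumptions (the function name and the fixed seed are mine, not the book's):

```python
import random

def heads_fraction(n_flips, seed=0):
    """Simulate n_flips fair coin flips; return the observed fraction of heads."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The observed frequency drifts toward the true probability (0.5)
# as the sample grows, per the law of large numbers.
for n in (100, 10_000, 1_000_000):
    print(n, heads_fraction(n))
```

With a million flips the fraction lands within a fraction of a percent of 0.5; with only a hundred flips it can wander noticeably further.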
As Hume put it, relying on induction requires us to believe that “instances of which we have had no experience resemble those of which we have had experience.”
just works out that the world has certain characteristics, and we can examine the world and tease out the knowledge that (we think) we have about
Correlations might suggest an underlying cause we can rely on (a bit of real knowledge), but we might have missed something when testing and observing what affects what. The correlation might be spurious, or accidental. We might have been looking for the wrong thing. The sample size might be too small or unrepresentative for reasons that only become apparent later.
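The small-sample point can be made concrete: two variables with no causal link at all can still show a sizable correlation in a handful of observations, which washes out as the sample grows. A sketch (the helper names and seeds are mine):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def random_pairs(n, seed):
    """Two genuinely independent random sequences -- no underlying cause."""
    rng = random.Random(seed)
    return ([rng.random() for _ in range(n)],
            [rng.random() for _ in range(n)])

# With 5 observations the correlation may look meaningful;
# with 5,000 the spurious association shrinks toward zero.
xs, ys = random_pairs(5, seed=1)
print("n=5    r =", round(pearson(xs, ys), 3))
xs, ys = random_pairs(5000, seed=1)
print("n=5000 r =", round(pearson(xs, ys), 3))
```

The large-sample correlation here is near zero because there really is no cause connecting the two series; the danger the highlight describes is inferring one from the small sample.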
At root, all induction is based on enumeration.
Hume put it, we need to see “constant correlations” to infer causes, and we need to see enumerated examples to infer categories or types.
And new inductive inferences in the sciences inevitably build on older ones scientists now believe to be solid and true.
Induction doesn’t require knowledge about causes (in that case it wouldn’t be enumerative).
Hypotheses that cite specific causes are the goal of observation, but unfortunately the logical resources of induction are inadequate to supply them.
It is often misunderstood, too, which contributes to a general overconfidence that induction ensures “scientific”
because what we think we know can prevent us from seeing anything new.
have to understand the significance of what we observe.
All the devil’s details are in the novelty, which i...
newly discovered facts can surprise us.
science does not accrue knowledge by collecting or enumerating facts.
we don’t gain scientific knowledge solely by induction. In fact, induction by itself is hopelessly flawed.
Induction isn’t just incomplete, it positively cannot confirm scientific theories or beliefs by enumerating observations.
This is an example of our craving to apply the inductive fallacy even to random events.
An inductive inference with true premises has led to a false conclusion.
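A schematic version of that failure, using the standard black-swan illustration (my example, not the book's):

```python
# Every premise is true: each swan observed so far really is white.
observed = ["white swan"] * 10_000

# Enumerative induction generalizes from the sample...
conclusion = ("all swans are white"
              if all("white" in s for s in observed) else None)
print(conclusion)  # "all swans are white" -- the induction goes through

# ...but a single new observation shows the conclusion was false all along.
observed.append("black swan")
print(all("white" in s for s in observed))  # False
```

No number of confirming instances could have secured the generalization; one counterexample defeats it.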
“habits of association”
But knowledge is often disguised belief—what we think we know can be wrong.
approaches that learn from experience of gameplay work so well.
induces hypotheses about the best moves to make on the board given its position and the opponent’s.
Computer scientists relying on inductive methods often dismiss Hume’s (or Russell’s) problem of induction as irrelevant.
This response misses the point. A method known as “probably approximately correct” governs hypothesis formation for statistical AI, like machine learning, and is known to be effective for weeding out bad or false hypotheses over time.
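For reference, the core PAC guarantee can be sketched numerically. The bound below is the standard one for a finite hypothesis class and a consistent learner (the function name and example numbers are mine, not the book's):

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Classic PAC bound for a finite hypothesis class H and a consistent
    learner: m >= (1/epsilon) * (ln|H| + ln(1/delta)) labeled examples
    suffice so that, with probability at least 1 - delta, the learned
    hypothesis has true error at most epsilon."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta))
                     / epsilon)

# e.g. a million candidate hypotheses, 1% error tolerance, 99% confidence
print(pac_sample_bound(10**6, epsilon=0.01, delta=0.01))
```

Note what the guarantee quietly assumes: future examples are drawn from the same fixed distribution as past ones, which is precisely the uniformity of nature that Hume's problem puts in question.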
humans solve the problem of inference not with inductive inference in some stronger form, but by combining it somehow with more powerful types of inference that contribute to understanding.
Induction casts intelligence as the detection of regularity. Statistical AI excels at capturing regularities by analyzing data,
Machine learning is inductive because it acquires knowledge from observation of data.
to learn statistical regularities in the dataset rather than higher-level abstract concepts.”
Because many examples are required to boost learning (in the case of Go, the example games run into the millions), the systems are glorified enumerative induction
constraints of the game features and rules of play.
Thinking in the real world depends on the sensitive detection of abnormality, or exceptions.
would have prior knowledge, supplied by humans.
In fact, by focusing on “easy” successes exploiting regularities, AI research is in danger of collectively moving away from progress toward general intelligence.
Inductive strategies by themselves give false hope.
that machine learning can never supply real understanding because the analysis of data does not bridge to knowledge of the causal structure of the real world,
Alas, the human-supplied feature cannot be added to other photos not prepared this way, so the feature is not syntactically extractable, and is therefore useless.
and cannot be addressed by machine learning limited by
All of this is to say that data alone, big data or not, and inductive methods like machine learning have inherent limitations that constitute roadblocks to progress in AI.