The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
And here we have it: Turing’s great genius, and his great error, was in thinking that human intelligence reduces to problem-solving.
Guessing is known in science as forming hypotheses
supercomputers without real intelligence just get to wrong answers more quickly.
“Once we get to human levels of intelligence, the system can design a smarter-than-human version of itself,” so the hope goes. But we already have “human-level” intelligence—we’re human. Can we do this? What are the intelligence explosion promoters really talking about?
themes, and thus conveniently absolves individual scientists from the responsibility of needing to make scientific breakthroughs or develop revolutionary ideas.
Imitation Game, the Turing test. Dartmouth scientists, too, thought
Here, it’s best to be clear: equating a mind with a computer is not scientific; it’s philosophical.
Admitting complexity—and complications—gets us further than does easy oversimplification.
The fact that conjectures lead to discoveries doesn’t fit with mechanical accounts of science; to the contrary, it contradicts them. But detective work, scientific discovery, innovation, and common sense are all workings of the mind; they are all inferences that AI scientists in search of generally intelligent machines must somehow account for. As you can
Soundness tells us that the premises really are “true.”
Knowledge gleaned from observations is always provisional.
(The law of large numbers tells us that given a large enough sample, the observed frequency will approach the actual probability: across a million coin flips, the split will be quite close to fifty-fifty.)
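A quick sanity check of this highlight's claim, as a minimal Python sketch (the specific sample sizes are mine, chosen for illustration):

```python
# Simulate fair coin flips and watch the observed frequency of heads
# approach the true probability of 0.5 as the sample grows.
import random

random.seed(0)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9,} flips: observed frequency of heads = {heads / n:.4f}")
```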
As Hume put it, relying on induction requires us to believe that “instances of which we have had no experience resemble those of which we have had experience.”
just works out that the world has certain characteristics, and we can examine the world and tease out the knowledge that (we think) we have about
Correlations might suggest an underlying cause we can rely on (a bit of real knowledge), but we might have missed something when testing and observing what affects what. The correlation might be spurious, or accidental. We might have been looking for the wrong thing. The sample size might be too small or unrepresentative for reasons that only become apparent later.
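A minimal illustration of a spurious correlation (my construction, not the book's): two series generated entirely independently can still show a sample correlation far from zero, here because each series drifts over time.

```python
# Two independent random walks share no causal link, yet their sample
# correlation is frequently large in magnitude, because both series trend.
import random
import statistics

random.seed(42)

def random_walk(n: int) -> list[float]:
    total, path = 0.0, []
    for _ in range(n):
        total += random.gauss(0, 1)
        path.append(total)
    return path

a = random_walk(500)
b = random_walk(500)
print(f"correlation of two unrelated walks: {statistics.correlation(a, b):+.2f}")
```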
At root, all induction is based on enumeration.
Hume put it, we need to see “constant correlations” to infer causes, and we need to see enumerated examples to infer categories or types.
And new inductive inferences in the sciences inevitably build on older ones scientists now believe to be solid and true.
Induction doesn’t require knowledge about causes (in that case it wouldn’t be enumerative).
Hypotheses that cite specific causes are the goal of observation, but unfortunately the logical resources of induction are inadequate to supply them.
It is often misunderstood, too, which contributes to a general overconfidence that induction ensures “scientific”
because what we think we know can prevent us from seeing anything new.
have to understand the significance of what we observe.
All the devil’s details are in the novelty, which i...
newly discovered facts can surprise us.
science does not accrue knowledge by collecting or enumerating facts.
we don’t gain scientific knowledge solely by induction. In fact, induction by itself is hopelessly flawed.
Induction isn’t just incomplete; it positively cannot confirm scientific theories or beliefs by enumerating observations.
This is an example of our craving to apply the inductive fallacy even to random events.
An inductive inference with true premises has led to a false conclusion.
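The classic swan example makes this point concrete; here is a toy rendering in code (the sample size and labels are illustrative, not from the book):

```python
# Enumerative induction: every premise is a true observation, yet the
# induced generalization is false, and one new observation exposes it.
observations = ["white"] * 10_000          # every swan seen so far: true premises
all_swans_are_white = all(s == "white" for s in observations)
print("hypothesis consistent with all evidence:", all_swans_are_white)    # True

next_swan = "black"                        # the black swan, first seen in Australia
print("hypothesis survives the next observation:", next_swan == "white")  # False
```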
“habits of association”
But knowledge is often disguised belief—what we think we know can be wrong.
approaches that learn from experience of gameplay work so well.
induces hypotheses about the best moves to make on the board given its position and the opponent’s.
Computer scientists relying on inductive methods often dismiss Hume’s (or Russell’s) problem of induction as irrelevant.
This response misses the point. A method known as “probably approximately correct” governs hypothesis formation for statistical AI, like machine learning, and is known to be effective for weeding out bad or false hypotheses over time.
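For context, one standard formal result behind "probably approximately correct" learning is the sample-complexity bound for a finite hypothesis class; the sketch below (my choice of numbers) computes it:

```python
# PAC bound for a finite hypothesis class H and a consistent learner:
# with m >= (1/eps) * (ln|H| + ln(1/delta)) examples, any hypothesis that
# fits the sample has true error <= eps with probability >= 1 - delta.
# This is the precise sense in which bad hypotheses get "weeded out".
import math

def pac_sample_size(hypothesis_count: int, eps: float, delta: float) -> int:
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / eps)

# e.g. a million hypotheses, 1% error tolerance, 95% confidence: 1,682 examples
print(pac_sample_size(hypothesis_count=10**6, eps=0.01, delta=0.05))
```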
humans solve the problem of inference not with inductive inference in some stronger form, but by combining it somehow with more powerful types of inference that contribute to understanding.
Induction casts intelligence as the detection of regularity. Statistical AI excels at capturing regularities by analyzing data,
Machine learning is inductive because it acquires knowledge from observation of data.
to learn statistical regularities in the dataset rather than higher-level abstract concepts.”
Because many examples are required to boost learning (in the case of Go, the example games run into the millions), the systems are glorified enumerative induction
constraints of the game features and rules of play.
Thinking in the real world depends on the sensitive detection of abnormality, or exceptions.
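A small sketch of why this matters (my example, not the book's): a learner that has captured a regularity perfectly still answers wild outliers with full confidence, because nothing in the induced rule represents "abnormal".

```python
# A 1-nearest-neighbor "learner" over a clean regularity: small inputs are
# "light", large inputs are "heavy". It never flags anything as abnormal.
def nearest_label(x: float, examples: list[tuple[float, str]]) -> str:
    return min(examples, key=lambda e: abs(e[0] - x))[1]

train = [(1, "light"), (2, "light"), (3, "light"),
         (100, "heavy"), (101, "heavy"), (102, "heavy")]

print(nearest_label(2.5, train))     # "light": inside the regularity
print(nearest_label(-50000, train))  # "light": a wild exception, answered
                                     # confidently instead of being flagged
```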
would have prior knowledge, supplied by humans.
In fact, by focusing on “easy” successes exploiting regularities, AI research is in danger of collectively moving away from progress toward general intelligence.
Inductive strategies by themselves give false hope.
that machine learning can never supply real understanding because the analysis of data does not bridge to knowledge of the causal structure of the real world,
Alas, the human-supplied feature cannot be added to other photos not prepared this way, so the feature is not syntactically extractable, and is therefore useless.
and cannot be addressed by machine learning limited by
All of this is to say that data alone, big data or not, and inductive methods like machine learning have inherent limitations that constitute roadblocks to progress in AI.