How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life
1%
It ain’t so much the things we don’t know that get us into trouble. It’s the things we know that just ain’t so.
2%
People who are charged with deciding who is to be admitted to a distinguished undergraduate institution, a prestigious graduate school, or a select executive training program all think they can make more effective admissions decisions if each candidate is seen in a brief, personal interview. They cannot. Research indicates that decisions based on objective criteria alone are at least as effective as those influenced by subjective impressions formed in an interview.2
Ian Pitchford
Interviews are a standard requirement of the QAA Code of Practice for Research Degrees and yet they don't work!
2%
First, people do not hold questionable beliefs simply because they have not been exposed to the relevant evidence. Erroneous beliefs plague both experienced professionals and less informed laypeople alike. In this respect, the admissions officials and maternity ward nurses should “know better.” They are professionals. They are in regular contact with the data. But they are mistaken.
2%
As these remarks suggest, many questionable and erroneous beliefs have purely cognitive origins, and can be traced to imperfections in our capacities to process information and draw conclusions. We hold many dubious beliefs, in other words, not because they satisfy some important psychological need, but because they seem to be the most sensible conclusions consistent with the available evidence.
3%
That our mistaken beliefs about aphrodisiacs and cancer cures have brought a number of species to the brink of extinction should challenge our own species to do better—to insist on clearer thinking and the effort required to obtain more valid beliefs about the world. “A little superstition” is a luxury we should not be allowed and can ill afford.
4%
“When people learn no tools of judgment and merely follow their hopes, the seeds of political manipulation are sown.”11
4%
In 1677, Baruch Spinoza wrote his famous words, “Nature abhors a vacuum,” to describe a host of physical phenomena. Three hundred years later, it seems that his statement applies as well to human nature, for it too abhors a vacuum. We are predisposed to see order, pattern, and meaning in the world, and we find randomness, chaos, and meaninglessness unsatisfying.
Ian Pitchford
A satisfying comparison.
4%
Human nature abhors a lack of predictability and the absence of meaning. As a consequence, we tend to “see” order where there is none, and we spot meaningful patterns where only the vagaries of chance are operating.
4%
Nature has no rooting interest. The same is largely true of human nature as well. Often we impose order even when there is no motive to do so. We do not “want” to see a man in the moon. We do not profit from the illusion. We just see it.
4%
It may have been bred into us through evolution because of its general adaptiveness: We can capitalize on ordered phenomena in ways that we cannot on those that are random.
Ian Pitchford
And for this tendency to evolve, it would only need to confer a slight advantage over not perceiving order in things - and the environment of evolutionary adaptedness would have been far less complex than our modern environment. In moral terms this insight makes it easier to forgive some aspects of superstition, but in practical terms it implies that superstition will be hard to combat, requiring endless cognitive vigilance.
5%
Many of the mechanisms that distort our judgments stem from basic cognitive processes that are usually quite helpful in accurately perceiving and understanding the world.
5%
Clearly, the tendency to look for order and to spot patterns is enormously helpful, particularly when we subject whatever hunches it generates to further, more rigorous test (as both Semmelweis and Darwin did, for example). Many times, however, we treat the products of this tendency not as hypotheses, but as established facts.
5%
The predisposition to impose order can be so automatic and so unchecked that we often end up believing in the existence of phenomena that just aren’t there.
6%
Contrary to the expectations expressed by our sample of fans, players were not more likely to make a shot after making their last one, two, or three shots than after missing their last one, two, or three shots. In fact, there was a slight tendency for players to shoot better after missing their last shot.
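This is easy to check against a null model. The sketch below is my own, not the study's: it assumes a shooter whose shots are independent coin flips with an arbitrary 50% hit rate, and shows that conditioning on the previous shot changes nothing.

```python
# Sketch: a "streakless" shooter whose makes are independent Bernoulli trials.
# The 50% hit rate and shot count are illustrative assumptions, not data.
import random

random.seed(42)
shots = [random.random() < 0.5 for _ in range(100_000)]  # True = hit

after_hit  = [curr for prev, curr in zip(shots, shots[1:]) if prev]
after_miss = [curr for prev, curr in zip(shots, shots[1:]) if not prev]

print(f"P(hit | made last shot):   {sum(after_hit) / len(after_hit):.3f}")
print(f"P(hit | missed last shot): {sum(after_miss) / len(after_miss):.3f}")
# Both come out near 0.500: independence predicts no hot hand, which is
# essentially the pattern the field data showed.
```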
8%
This qualification aside, why do people believe in the hot hand when it does not exist? There are at least two possible explanations. The first involves the tendency for people’s preconceptions to bias their interpretations of what they see. Because people have theories about how confidence affects performance, they may expect to see streak shooting even before watching their first basketball game.
8%
A second explanation involves a process that appears to be more fundamental, and thus operates even in the absence of any explicit theories people might have. Psychologists have discovered that people have faulty intuitions about what chance sequences look like.5
8%
People expect sequences of coin flips, for example, to alternate between heads and tails more than they actually do.
8%
The intuition that random events such as coin flips should alternate between heads and tails more than they do has been described by statisticians as a “clustering illusion.” Random distributions seem to us to have too many clusters or streaks of consecutive outcomes of the same type, and so we have difficulty accepting their true origins.
Ian Pitchford
An inbuilt tendency to interpret clustering as an indication of a causal process at work.
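A short simulation, under assumptions of my own choosing (20 flips per sequence, 10,000 sequences), makes the illusion concrete: a fair coin alternates only about half the time, and runs of four or more identical outcomes appear in roughly three quarters of 20-flip sequences, so "clusters" are what genuine randomness looks like.

```python
# Sketch: how streaky do truly random sequences actually look?
import random
from itertools import groupby

random.seed(0)
trials, n = 10_000, 20
alternation_rates, longest_runs = [], []

for _ in range(trials):
    flips = [random.choice("HT") for _ in range(n)]
    # Fraction of adjacent pairs that differ (expected: 0.5 for a fair coin).
    alternation_rates.append(sum(a != b for a, b in zip(flips, flips[1:])) / (n - 1))
    # Length of the longest run of identical outcomes.
    longest_runs.append(max(len(list(g)) for _, g in groupby(flips)))

print(f"mean alternation rate: {sum(alternation_rates) / trials:.3f}")                  # ~0.50
print(f"sequences with a run of 4+: {sum(r >= 4 for r in longest_runs) / trials:.2f}")  # ~0.77
```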
9%
It is only through the kind of objective assessment we performed that the illusion can be overcome.
Ian Pitchford
And this doesn't bode particularly well.
9%
The best explanation to date of the misperception of random sequences is offered by psychologists Daniel Kahneman and Amos Tversky, who attribute it to people’s tendency to be overly influenced by judgments of “representativeness.”8
9%
Representativeness can be thought of as the reflexive tendency to assess the similarity of outcomes, instances, and categories on relatively salient and even superficial features, and then to use these assessments of similarity as a basis of judgment.
9%
We expect effects to look like their causes; thus, we are more likely to attribute a case of heartburn to spicy rather than bland food, and we are more inclined to see jagged handwriting as a sign of a tense rather than a relaxed personality.
9%
The clustering illusion thus stems from a form of over-generalization: We expect the correct proportion of heads and tails or hits and misses to be present not only globally in a long sequence, but also locally in each of its parts.
Ian Pitchford
We over-generalise from very small samples.
10%
To the scientist, such apparent anomalies merely suggest hypotheses that are subsequently tested on other, independent sets of data. Only if the anomaly persists is the hypothesis to be taken seriously.
11%
Furthermore, once we suspect that a phenomenon exists, we generally have little trouble explaining why it exists or what it means. People are extraordinarily good at ad hoc explanation. According to past research, if people are erroneously led to believe that they are either above or below average at some task, they can explain either their superior or inferior performance with little difficulty.12
11%
To live, it seems, is to explain, to justify, and to find coherence among diverse outcomes, characteristics, and causes.
Ian Pitchford
We routinely and effortlessly cement our perceptions with causal theories.
11%
It is as if the left hemisphere contains an explanation module along with, or as part of, its language center—an explanation module that can quickly and easily make sense of even the most bizarre patterns of information.15
Ian Pitchford
An explanation module in the left hemisphere.
11%
It suggests that once a person has (mis)identified a random pattern as a “real” phenomenon, it will not exist as a puzzling, isolated fact about the world. Rather, it is quickly explained and readily integrated into the person’s pre-existing theories and beliefs. These theories, furthermore, then serve to bias the person’s evaluation of new information in such a way that the initial belief becomes solidly entrenched.
12%
People have more difficulty, however, acquiring a truly general and deep understanding that whenever any two variables are imperfectly correlated, extreme values of one of the variables are matched, on the average, by less extreme values of the other.
Ian Pitchford
It's difficult to have a deep appreciation of statistical regression.
12%
First, people tend to be insufficiently conservative or “regressive” when making predictions. Parents expect a child who excels in school one year to do as well or better the following year; shareholders expect a company that has had a banner year to earn as much or more the next.
12%
This tendency for people’s predictions to be insufficiently regressive has been implicated in the high rate of business failures, in disastrous personnel hiring decisions, and in non-conservative risk estimates made by certified public accountants.
12%
Statistical theory dictates that the better one’s basis of prediction, the less regressive one needs to be.
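Stated as a formula (the standard least-squares result, not a quotation from the book): with predictor and outcome expressed as standardized scores, the best linear prediction is

```latex
\hat{z}_Y = r \, z_X
```

where r is the correlation between the two. When r = 1, the prediction should be exactly as extreme as the predictor; when r = 0.3, it should sit only 30% of the way out from the mean. Intuitive predictions behave as though r were near 1 regardless of how good the basis of prediction actually is.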
12%
This tendency to make non-regressive predictions, like the clustering illusion, can be attributed to the compelling nature of judgment by representativeness. In this case, people’s judgments reflect the intuition that the prediction ought to resemble the predictor as much as possible, and thus that it should deviate from the average to the same extent.
12%
A second, related problem that people have with regression is known as the regression fallacy. The regression fallacy refers to the tendency to fail to recognize statistical regression when it occurs, and instead to “explain” the observed phenomena with superfluous and often complicated causal theories. A lesser performance that follows a brilliant one is attributed to slacking off; a slight improvement in felony statistics following a crime wave is attributed to a new law enforcement policy.
13%
Athletes’ performances at different times are imperfectly correlated. Thus, due to regression alone, we can expect an extraordinarily good performance to be followed, on the average, by a somewhat less extraordinary performance.
13%
The regression fallacy also plays a role in shaping parents’ and teachers’ beliefs about the relative effectiveness of reward and punishment in producing desired behavior and learning.
Ian Pitchford
Reward and punishment in the context of the regression fallacy.
13%
that rewarding desirable responses is generally more effective in shaping behavior than punishing undesirable responses.
13%
One explanation for this discrepancy between common practice and the recommendation of psychologists is that regression effects may mask the true effectiveness of reward, and spuriously boost the apparent effectiveness of punishment.
13%
Regression effects, in other words, serve to “punish the administration of reward, and to reward the administration of punishment.”21
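A simulation makes this masking effect vivid. It is a sketch under invented assumptions: each performance is a fixed skill level plus independent noise, and the feedback is given no causal effect whatsoever.

```python
# Sketch: regression alone makes reward look harmful and punishment helpful.
import random

random.seed(1)
skill, noise, n = 50.0, 10.0, 100_000
scores = [skill + random.gauss(0, noise) for _ in range(n)]  # feedback does nothing

pairs = list(zip(scores, scores[1:]))
# "Praise" follows performances more than one SD above average, "criticism"
# follows performances more than one SD below; then inspect the next outcome.
after_praise = [nxt - cur for cur, nxt in pairs if cur > skill + noise]
after_blame  = [nxt - cur for cur, nxt in pairs if cur < skill - noise]

print(f"change after praise:    {sum(after_praise) / len(after_praise):+.2f}")  # negative
print(f"change after criticism: {sum(after_blame) / len(after_blame):+.2f}")    # positive
# Performance "worsens" after reward and "improves" after punishment purely
# through regression to the mean.
```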
13%
Perhaps the reader has anticipated how the two difficulties discussed in this chapter—the clustering illusion and the regression fallacy—can combine to produce firmly-held, but questionable beliefs. In particular, they may combine to produce a variety of superstitious beliefs about how to end a bad streak or how to prolong a good one.
14%
Examples like this illustrate how the misperception of random sequences and the misinterpretation of regression can lead to the formation of superstitious beliefs. Furthermore, these beliefs and how they are accounted for do not remain as isolated convictions, but serve to bolster or create more general beliefs—in this case about the wisdom of religious officials, the “proper” role of women in society, and even the existence of a powerful and watchful god.
Ian Pitchford
Clustering and the regression fallacy have a lot to answer for.
14%
They still cling stubbornly to the idea that the only good answer is a yes answer. If they say, “Is the number between 5,000 and 10,000?” and I say yes, they cheer; if I say no, they groan, even though they get exactly the same amount of information in either case.
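A quick calculation shows why the cheering and groaning are misplaced, assuming (as in the game) that the question splits the remaining N candidates exactly in half:

```latex
I(\text{yes}) = I(\text{no}) = \log_2\!\frac{N}{N/2} = 1 \text{ bit}
```

Either answer eliminates half of the possibilities, so each delivers exactly one bit of information; only a question that splits the candidates unevenly can make one answer more informative than the other.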
14%
Such convictions are on the right track. Evidence of the type mentioned in these statements is certainly necessary for the beliefs to be true. If a phenomenon exists, there must be some positive evidence of its existence—“instances” of its existence must be visible to oneself or to others. But it should be clear that such evidence is hardly sufficient to warrant such beliefs.
14%
Because people often fail to recognize that a particular belief rests on inadequate evidence, the belief enjoys an “illusion of validity”1 and is considered, not a matter of opinion or values, but a logical conclusion from the objective evidence that any rational person would make.
15%
To adequately assess whether adoption leads to conception, it is necessary to compare the probability of conception after adopting, a/(a+b), with the probability of conception after not adopting, c/(c+d). There is now a large literature on how well people evaluate this kind of information in assessing the presence or strength of such relationships.2
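As a worked example, the cell counts below are invented purely for illustration (a = adopted and conceived, b = adopted and did not, c = conceived without adopting, d = neither); nothing here comes from real adoption data.

```python
# Sketch: the full 2x2 comparison that the "adoption leads to conception"
# belief requires. Counts are hypothetical.
a, b, c, d = 30, 70, 300, 700

p_conceive_given_adopt    = a / (a + b)   # 0.30
p_conceive_given_no_adopt = c / (c + d)   # 0.30

print(f"P(conception | adoption):    {p_conceive_given_adopt:.2f}")
print(f"P(conception | no adoption): {p_conceive_given_no_adopt:.2f}")
# Identical rates: the memorable cell "a" (couples who adopted and then
# conceived) is meaningless without the other three cells as a baseline.
```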
15%
The most likely reason for the excessive influence of confirmatory information is that it is easier to deal with cognitively. Consider someone trying to determine whether cloud seeding produces rain. An instance in which cloud seeding is followed by rain is clearly relevant to the issue in question—it registers as an unambiguous success for cloud seeding. In contrast, an instance in which it rains in the absence of cloud seeding is only indirectly relevant—it is neither a success nor a failure. Rather, it represents a consequence of not seeding that serves only as part of a baseline against which the outcomes of seeding can be compared.
15%
Non-confirmatory information can also be harder to deal with because it is usually framed negatively (e.g., it rained when we did not seed), and we sometimes have trouble conceptualizing negative assertions. Compare, for example, how much easier it is to comprehend the statement “All Greeks are mortals” than “All non-mortals are non-Greeks.” Thus, one would expect confirmatory information to be particularly influential whenever the disconfirmations are framed as negations. The research literature strongly supports this prediction.
Ian Pitchford
We have trouble conceptualising negative assertions.
16%
The influence of confirmatory information is particularly strong when both variables are asymmetric because in such cases three of the four cells contain information about the nonoccurrence of one of the variables, and, once again, such negative or null instances have been shown to be particularly difficult to process.4
16%
The Tendency to Seek Confirmatory Information.
16%
The tendency to seek information consistent with a hypothesis need not stem from any desire for the hypothesis to be true. The people in this experiment surely did not care whether all cards with vowels on one side had even numbers on the other; they sought information consistent with the hypothesis nonetheless.
Ian Pitchford
A bias to seek confirmatory information even when there is no desire for the hypothesis to be true.