Kindle Notes & Highlights
It ain’t so much the things we don’t know that get us into trouble. It’s the things we know that just ain’t so.
People who are charged with deciding who is to be admitted to a distinguished undergraduate institution, a prestigious graduate school, or a select executive training program all think they can make more effective admissions decisions if each candidate is seen in a brief, personal interview. They cannot. Research indicates that decisions based on objective criteria alone are at least as effective as those influenced by subjective impressions formed in an interview.2
Interviews are a standard requirement of the QAA Code of Practice for Research Degrees and yet they don't work!
First, people do not hold questionable beliefs simply because they have not been exposed to the relevant evidence. Erroneous beliefs plague experienced professionals and less informed laypeople alike. In this respect, the admissions officials and maternity ward nurses should “know better.” They are professionals. They are in regular contact with the data. But they are mistaken.
As these remarks suggest, many questionable and erroneous beliefs have purely cognitive origins, and can be traced to imperfections in our capacities to process information and draw conclusions. We hold many dubious beliefs, in other words, not because they satisfy some important psychological need, but because they seem to be the most sensible conclusions consistent with the available evidence.
That our mistaken beliefs about aphrodisiacs and cancer cures have brought a number of species to the brink of extinction should challenge our own species to do better—to insist on clearer thinking and the effort required to obtain more valid beliefs about the world. “A little superstition” is a luxury we should not be allowed and can ill afford.
“When people learn no tools of judgment and merely follow their hopes, the seeds of political manipulation are sown.”11
In 1677, Baruch Spinoza wrote his famous words, “Nature abhors a vacuum,” to describe a host of physical phenomena. Three hundred years later, it seems that his statement applies as well to human nature, for it too abhors a vacuum. We are predisposed to see order, pattern, and meaning in the world, and we find randomness, chaos, and meaninglessness unsatisfying.
Human nature abhors a lack of predictability and the absence of meaning. As a consequence, we tend to “see” order where there is none, and we spot meaningful patterns where only the vagaries of chance are operating.
Nature has no rooting interest. The same is largely true of human nature as well. Often we impose order even when there is no motive to do so. We do not “want” to see a man in the moon. We do not profit from the illusion. We just see it.
It may have been bred into us through evolution because of its general adaptiveness: We can capitalize on ordered phenomena in ways that we cannot on those that are random.
For this tendency to evolve, it would only need to confer a slight advantage over not perceiving order in things, and the environment of evolutionary adaptedness would have been far less complex than our modern one. In moral terms, this insight makes it easier to forgive some aspects of superstition; in practical terms, it implies that superstition will be hard to combat, requiring endless cognitive vigilance.
Many of the mechanisms that distort our judgments stem from basic cognitive processes that are usually quite helpful in accurately perceiving and understanding the world.
Clearly, the tendency to look for order and to spot patterns is enormously helpful, particularly when we subject whatever hunches it generates to further, more rigorous test (as both Semmelweis and Darwin did, for example). Many times, however, we treat the products of this tendency not as hypotheses, but as established facts.
The predisposition to impose order can be so automatic and so unchecked that we often end up believing in the existence of phenomena that just aren’t there.
Contrary to the expectations expressed by our sample of fans, players were not more likely to make a shot after making their last one, two, or three shots than after missing their last one, two, or three shots. In fact, there was a slight tendency for players to shoot better after missing their last shot.
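The null result is easy to reproduce in miniature. Below is a minimal simulation sketch (an assumed 50% shooter with independent shots, not the study's actual data): when each attempt is an independent coin flip, post-make and post-miss accuracy come out the same.

```python
import random

# A minimal sketch: simulate a shooter who hits each shot independently
# with a fixed probability, then compare the hit rate after a make with
# the hit rate after a miss. Parameters are illustrative assumptions.
random.seed(42)
P_HIT = 0.5
N_SHOTS = 100_000

shots = [random.random() < P_HIT for _ in range(N_SHOTS)]

after_make = [curr for prev, curr in zip(shots, shots[1:]) if prev]
after_miss = [curr for prev, curr in zip(shots, shots[1:]) if not prev]

print(f"P(hit | made last shot):   {sum(after_make) / len(after_make):.3f}")
print(f"P(hit | missed last shot): {sum(after_miss) / len(after_miss):.3f}")
# Both conditional rates hover around 0.5: independence leaves no trace of
# a "hot hand", yet short stretches of such a sequence still look streaky.
```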
That slight reversal aside, why do people believe in the hot hand when it does not exist? There are at least two possible explanations. The first involves the tendency for people’s preconceptions to bias their interpretations of what they see. Because people have theories about how confidence affects performance, they may expect to see streak shooting even before watching their first basketball game.
A second explanation involves a process that appears to be more fundamental, and thus operates even in the absence of any explicit theories people might have. Psychologists have discovered that people have faulty intuitions about what chance sequences look like.5
People expect sequences of coin flips, for example, to alternate between heads and tails more than they actually do.
The intuition that random events such as coin flips should alternate between heads and tails more than they do gives rise to what has been called the “clustering illusion”: random distributions seem to us to have too many clusters or streaks of consecutive outcomes of the same type, and so we have difficulty accepting their true origins.
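A quick simulation makes the illusion concrete. The sketch below (parameters chosen purely for illustration) estimates how often 20 fair coin flips contain a run of four or more identical outcomes; the answer, roughly three times in four, is far higher than most intuitions allow.

```python
import random

# A minimal sketch: how often do 20 fair flips contain a run of 4+
# identical outcomes? Trial count and seed are arbitrary assumptions.
random.seed(0)
TRIALS, FLIPS, RUN = 100_000, 20, 4

def has_run(seq, length):
    """Return True if seq contains `length` consecutive equal outcomes."""
    count = 1
    for prev, curr in zip(seq, seq[1:]):
        count = count + 1 if curr == prev else 1
        if count >= length:
            return True
    return False

hits = sum(
    has_run([random.random() < 0.5 for _ in range(FLIPS)], RUN)
    for _ in range(TRIALS)
)
print(f"P(run of {RUN}+ in {FLIPS} flips): {hits / TRIALS:.2f}")  # about 0.77
```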
The best explanation to date of the misperception of random sequences is offered by psychologists Daniel Kahneman and Amos Tversky, who attribute it to people’s tendency to be overly influenced by judgments of “representativeness.”8
Representativeness can be thought of as the reflexive tendency to assess the similarity of outcomes, instances, and categories on relatively salient and even superficial features, and then to use these assessments of similarity as a basis of judgment.
We expect effects to look like their causes; thus, we are more likely to attribute a case of heartburn to spicy rather than bland food, and we are more inclined to see jagged handwriting as a sign of a tense rather than a relaxed personality.
To the scientist, such apparent anomalies merely suggest hypotheses that are subsequently tested on other, independent sets of data. Only if the anomaly persists is the hypothesis to be taken seriously.
Furthermore, once we suspect that a phenomenon exists, we generally have little trouble explaining why it exists or what it means. People are extraordinarily good at ad hoc explanation. According to past research, if people are erroneously led to believe that they are either above or below average at some task, they can explain either their superior or inferior performance with little difficulty.12
It suggests that once a person has (mis)identified a random pattern as a “real” phenomenon, it will not exist as a puzzling, isolated fact about the world. Rather, it is quickly explained and readily integrated into the person’s pre-existing theories and beliefs. These theories, furthermore, then serve to bias the person’s evaluation of new information in such a way that the initial belief becomes solidly entrenched.
People have more difficulty, however, acquiring a truly general and deep understanding that whenever any two variables are imperfectly correlated, extreme values of one of the variables are matched, on the average, by less extreme values of the other.
First, people tend to be insufficiently conservative or “regressive” when making predictions. Parents expect a child who excels in school one year to do as well or better the following year; shareholders expect a company that has had a banner year to earn as much or more the next.
This tendency for people’s predictions to be insufficiently regressive has been implicated in the high rate of business failures, in disastrous personnel hiring decisions, and in non-conservative risk estimates made by certified public accountants.
Statistical theory dictates that the better one’s basis of prediction, the less regressive one needs to be.
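In standardized units this principle has a simple form: the best linear prediction of the outcome is the observed score multiplied by the correlation r between predictor and outcome. The sketch below (with illustrative numbers, not figures from the book) shows how the appropriate shrinkage toward the average grows as the predictor weakens.

```python
# A minimal sketch of the statistical point, in standard-deviation units:
#     predicted_z = r * observed_z
# so the weaker the correlation r, the more regressive the prediction.

def regressive_prediction(observed_z: float, r: float) -> float:
    """Best linear prediction of the outcome, in SDs from the mean."""
    return r * observed_z

banner_year = 2.0  # e.g., a company two SDs above average this year (assumed)
for r in (1.0, 0.7, 0.3, 0.0):
    nxt = regressive_prediction(banner_year, r)
    print(f"r = {r:.1f}: predict {nxt:+.1f} SDs next year")
# r = 1.0 -> +2.0 SDs (perfect predictor: no regression needed)
# r = 0.0 -> +0.0 SDs (useless predictor: just predict the average)
```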
This tendency to make non-regressive predictions, like the clustering illusion, can be attributed to the compelling nature of judgment by representativeness. In this case, people’s judgments reflect the intuition that the prediction ought to resemble the predictor as much as possible, and thus that it should deviate from the average to the same extent.
A second, related problem that people have with regression is known as the regression fallacy. The regression fallacy refers to the tendency to fail to recognize statistical regression when it occurs, and instead to “explain” the observed phenomena with superfluous and often complicated causal theories. A lesser performance that follows a brilliant one is attributed to slacking off; a slight improvement in felony statistics following a crime wave is attributed to a new law enforcement policy.
Athletes’ performances at different times are imperfectly correlated. Thus, due to regression alone, we can expect an extraordinarily good performance to be followed, on the average, by a somewhat less extraordinary performance.
Psychologists have long maintained that rewarding desirable responses is generally more effective in shaping behavior than punishing undesirable responses.
One explanation for this discrepancy between common practice and the recommendation of psychologists is that regression effects may mask the true effectiveness of reward, and spuriously boost the apparent effectiveness of punishment.
Regression effects, in other words, serve to “punish the administration of reward, and to reward the administration of punishment.”21
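This masking effect falls straight out of regression and can be demonstrated without giving feedback any effect at all. In the sketch below (an assumed pure-chance performer, not data from any real training study), a coach who praises unusually good trials and criticizes unusually bad ones will see praise "fail" and criticism "work".

```python
import random

# A minimal sketch: performance on each trial is pure chance around a
# stable skill level. A coach praises any trial above +1 SD, criticizes
# any trial below -1 SD, and then observes the next trial.
random.seed(1)
trials = [random.gauss(0, 1) for _ in range(200_000)]

after_praise = [nxt - cur for cur, nxt in zip(trials, trials[1:]) if cur > 1]
after_blame  = [nxt - cur for cur, nxt in zip(trials, trials[1:]) if cur < -1]

print(f"avg change after praise:    {sum(after_praise) / len(after_praise):+.2f}")
print(f"avg change after criticism: {sum(after_blame) / len(after_blame):+.2f}")
# Praise is followed by decline and criticism by improvement, purely from
# regression to the mean: the feedback itself did nothing.
```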
Perhaps the reader has anticipated how the two difficulties discussed in this chapter—the clustering illusion and the regression fallacy—can combine to produce firmly held but questionable beliefs. In particular, they may combine to produce a variety of superstitious beliefs about how to end a bad streak or how to prolong a good one.
Examples like this illustrate how the misperception of random sequences and the misinterpretation of regression can lead to the formation of superstitious beliefs. Furthermore, these beliefs and how they are accounted for do not remain as isolated convictions, but serve to bolster or create more general beliefs—in this case about the wisdom of religious officials, the “proper” role of women in society, and even the existence of a powerful and watchful god.
They still cling stubbornly to the idea that the only good answer is a yes answer. If they say, “Is the number between 5,000 and 10,000?” and I say yes, they cheer; if I say no, they groan, even though they get exactly the same amount of information in either case.
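The equal-information claim can be checked directly. Here is a minimal sketch (with an assumed pool of 10,000 equally likely numbers, half of them satisfying the question): when a question bisects the remaining possibilities, “yes” and “no” each carry exactly one bit.

```python
from math import log2

# A minimal sketch of the information-theoretic point. The pool size and
# the even split are illustrative assumptions.
remaining = 10_000   # candidate numbers still possible
in_range = 5_000     # how many of them satisfy the question

p_yes = in_range / remaining
for answer, p in (("yes", p_yes), ("no", 1 - p_yes)):
    print(f"'{answer}': {-log2(p):.2f} bits")  # 1.00 bits either way
# Either answer halves the search space; cheering for "yes" and groaning
# at "no" tracks hopes, not information.
```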
Such convictions are on the right track. Evidence of the type mentioned in these statements is certainly necessary for the beliefs to be true. If a phenomenon exists, there must be some positive evidence of its existence—“instances” of its existence must be visible to oneself or to others. But it should be clear that such evidence is hardly sufficient to warrant such beliefs.
Because people often fail to recognize that a particular belief rests on inadequate evidence, the belief enjoys an “illusion of validity”1 and is considered, not a matter of opinion or values, but a logical conclusion from the objective evidence that any rational person would make.
To adequately assess whether adoption leads to conception, it is necessary to compare the probability of conception after adopting, a/(a+b), with the probability of conception after not adopting, c/(c+d). There is now a large literature on how well people evaluate this kind of information in assessing the presence or strength of such relationships.2
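A worked example may help; the counts below are made up, since the passage gives none. With all four cells of the 2x2 table in hand, the vivid adopt-then-conceive cases turn out to prove nothing on their own.

```python
# A minimal sketch with hypothetical counts (not data from the book):
# rows are adopted / not adopted, columns are conceived / did not conceive.
a, b = 8, 32    # adopted:     conceived / did not conceive
c, d = 20, 80   # not adopted: conceived / did not conceive

p_conceive_given_adopt = a / (a + b)
p_conceive_given_not   = c / (c + d)

print(f"P(conceive | adopted)     = {p_conceive_given_adopt:.2f}")
print(f"P(conceive | not adopted) = {p_conceive_given_not:.2f}")
# Both equal 0.20 here: the eight memorable adopt-then-conceive couples
# mean nothing until compared against this baseline.
```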
The most likely reason for the excessive influence of confirmatory information is that it is easier to deal with cognitively. Consider someone trying to determine whether cloud seeding produces rain. An instance in which cloud seeding is followed by rain is clearly relevant to the issue in question—it registers as an unambiguous success for cloud seeding. In contrast, an instance in which it rains in the absence of cloud seeding is only indirectly relevant—it is neither a success nor a failure. Rather, it represents a consequence of not seeding that serves only as part of a baseline against which the effectiveness of seeding can be evaluated.
Non-confirmatory information can also be harder to deal with because it is usually framed negatively (e.g., it rained when we did not seed), and we sometimes have trouble conceptualizing negative assertions. Compare, for example, how much easier it is to comprehend the statement “All Greeks are mortals” than “All non-mortals are non-Greeks.” Thus, one would expect confirmatory information to be particularly influential whenever the disconfirmations are framed as negations. The research literature strongly supports this prediction.
The influence of confirmatory information is particularly strong when both variables are asymmetric because in such cases three of the four cells contain information about the nonoccurrence of one of the variables, and, once again, such negative or null instances have been shown to be particularly difficult to process.4
The Tendency to Seek Confirmatory Information
The tendency to seek out information consistent with a hypothesis need not stem from any desire for the hypothesis to be true. The people in this experiment surely did not care whether all cards with vowels on one side had even numbers on the other; they sought information consistent with the hypothesis nonetheless.
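The experiment referred to here is the classic card-selection task; the sketch below assumes its standard four faces (E, K, 4, 7) for illustration. Testing the rule means turning the cards that could falsify it, yet the confirming even-numbered card is the popular, and useless, choice.

```python
# A minimal sketch of the card task, assuming the standard "E, K, 4, 7"
# version. Rule under test: "if a card has a vowel on one side, it has an
# even number on the other."
VOWELS = set("AEIOU")

def must_turn(face: str) -> bool:
    """A card needs turning only if it might hide a counterexample."""
    if face.isalpha():
        return face in VOWELS       # a vowel might hide an odd number
    return int(face) % 2 == 1       # an odd number might hide a vowel

for face in ["E", "K", "4", "7"]:
    print(face, "-> turn" if must_turn(face) else "-> leave")
# Only "E" and "7" can disconfirm the rule, yet most people pick "E" and
# "4": they check the confirming card and skip the potentially falsifying one.
```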

