The combination of a mental shotgun with intensity matching explains why we have intuitive judgments about many things that we know little about.
The target question is the assessment you intend to produce. The heuristic question is the simpler question that you answer instead.
The automatic processes of the mental shotgun and intensity matching often make available one or more answers to easy questions that could be mapped onto the target question.
Any emotionally significant question that alters a person’s mood will have the same effect. WYSIATI. The present state of mind looms very large when people evaluate their happiness.
We see here a new side of the “personality” of System 2. Until now I have mostly described it as a more or less acquiescent monitor, which allows considerable leeway to System 1. I have also presented System 2 as active in deliberate memory search, complex computations, comparisons, planning, and choice. In the bat-and-ball problem and in many other examples of the interplay between the two systems, it appeared that System 2 is ultimately in charge, with the ability to resist the suggestions of System 1, slow things down, and impose logical analysis. Self-criticism is one of the functions of System 2.
If you are the researcher, this outcome is costly to you because you have wasted time and effort, and failed to confirm a hypothesis that was in fact true. Using a sufficiently large sample is the only way to reduce the risk. Researchers who pick too small a sample leave themselves at the mercy of sampling luck. The risk of error can be estimated for any given sample size by a fairly simple procedure. Traditionally, however, psychologists do not use calculations to decide on a sample size. They use their judgment, which is commonly flawed.
The simple answer to these questions is that if you follow your intuition, you will more often than not err by misclassifying a random event as systematic. We are far too willing to reject the belief that much of what we see in life is random.
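A small simulation makes both points concrete. This is a minimal sketch in Python, not from the book; the fair-coin setup, the 70% threshold, and the sample sizes are arbitrary choices for illustration. With a perfectly fair coin, small samples produce lopsided, seemingly “systematic” results far more often than large ones.

```python
import random

def extreme_result_rate(n, trials=100_000, threshold=0.70):
    """Estimate how often a fair coin (true rate 0.5) yields a sample
    with at least `threshold` heads when only n tosses are observed."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if heads / n >= threshold:
            hits += 1
    return hits / trials

# Nothing systematic is going on, yet small samples "find" strong effects.
for n in (10, 50, 200):
    print(f"n={n:>3}: chance of >=70% heads ~ {extreme_result_rate(n):.4f}")
```

Larger samples shrink the chance of such flukes toward zero, which is the sense in which a sufficiently large sample is the only protection against sampling luck.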
The phenomenon we were studying is so common and so important in the everyday world that you should know its name: it is an anchoring effect. It occurs when people consider a particular value for an unknown quantity before estimating that quantity.
A well-intentioned child who turns down exceptionally loud music to meet a parent’s demand that it be played at a ‘reasonable’ volume may fail to adjust sufficiently from a high anchor, and may feel that genuine attempts at compromise are being overlooked. The driver and the child both deliberately adjust down, and both fail to adjust enough.
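As a toy illustration of insufficient adjustment (hypothetical numbers and a deliberately crude model, not from the book), imagine each judge starting at the anchor and covering only a fixed fraction of the distance toward the answer they would have given unanchored; the leftover distance is the anchor’s footprint.

```python
def adjusted_estimate(anchor, unanchored_answer, adjustment_fraction=0.6):
    """Toy anchor-and-adjust: start at the anchor and move only a fixed
    fraction of the way toward the answer a judge would give with no anchor."""
    return anchor + adjustment_fraction * (unanchored_answer - anchor)

unanchored = 50  # hypothetical answer a judge would give with no anchor
print(adjusted_estimate(anchor=90, unanchored_answer=unanchored))  # 66.0
print(adjusted_estimate(anchor=10, unanchored_answer=unanchored))  # 34.0
# Both judges adjust in the right direction, yet their final answers stay
# pulled toward whichever anchor they started from.
```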
The puzzle that defeated us is now solved, because the concept of suggestion is no longer obscure: suggestion is a priming effect, which selectively evokes compatible evidence.
If the content of a screen saver on an irrelevant computer can affect your willingness to help strangers without your being aware of it, how free are you?
The availability heuristic, like other heuristics of judgment, substitutes one question for another: you wish to estimate the size of a category or the frequency of an event, but you report an impression of the ease with which instances come to mind. Substitution of questions inevitably produces systematic errors.
One of the best-known studies of availability suggests that awareness of your own biases can contribute to peace in marriages, and probably in other joint projects.
The mere observation that there is usually more than 100% credit to go around is sometimes sufficient to defuse the situation.
If you cannot easily come up with instances of meek behavior, you are likely to conclude that you are not meek at all. Self-ratings were dominated by the ease with which examples had come to mind. The experience of fluent retrieval of instances trumped the number retrieved.
Frowning normally accompanies cognitive strain and the effect is symmetric: when people are instructed to frown while doing a task, they actually try harder and experience greater cognitive strain.
The results suggest that the participants make an inference: if I am having so much more trouble than expected coming up with instances of my assertiveness, then I can’t be very assertive. Note that this inference rests on a surprise—fluency being worse than expected. The availability heuristic that the subjects apply is better described as an “unexplained unavailability” heuristic.
As predicted, participants whose experience of fluency was “explained” did not use it as a heuristic; the subjects who were told that music would make retrieval more difficult rated themselves as equally assertive when they retrieved twelve instances as when they retrieved six.
Multiple lines of evidence converge on the conclusion that people who let themselves be guided by System 1 are more strongly susceptible to availability biases than others who are in a state of higher vigilance. The following are some conditions in which people “go with the flow” and are affected more strongly by ease of retrieval than by the content they retrieved: when they are engaged in another effortful task at the same time; when they are in a good mood because they just thought of a happy episode in their life; if they score low on a depression scale; if they are knowledgeable novices on the topic of the task, in contrast to true experts.
Merely reminding people of a time when they had power increases their apparent trust in their own intuition.
Estimates of causes of death are warped by media coverage. The coverage is itself biased toward novelty and poignancy. The media do not just shape what the public is interested in, but also are shaped by it.
The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).
“The emotional tail wags the rational dog.” The affect heuristic simplifies our lives by creating a world that is much tidier than reality. Good technologies have few costs in the imaginary world we inhabit, bad technologies have no benefits, and all decisions are easy. In the real world, of course, we often face painful tradeoffs between benefits and costs.
“Risk” does not exist “out there,” independent of our minds and culture, waiting to be measured. Human beings have invented the concept of “risk” to help them understand and cope with the dangers and uncertainties of life. Although these dangers are real, there is no such thing as “real risk” or “objective risk.”
Slovic argues that the evaluation of the risk depends on the choice of a measure—with the obvious possibility that the choice may have been guided by a preference for one outcome or another. He goes on to conclude that “defining risk is thus an exercise in power.”
The regulation of risks should be guided by rational weighting of costs and benefits, and the natural units for this analysis are the number of lives saved (or perhaps the number of life-years saved, which gives more weight to saving the young) and the dollar cost to the economy. Poor regulation is wasteful of lives and money, both of which can be measured objectively.
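A minimal sketch of that accounting, with entirely made-up figures for two hypothetical regulations, shows why cost per life-year saved is the natural comparison rather than the headline price tag.

```python
# Hypothetical regulations and figures, for illustration only.
regulations = {
    "Regulation A": {"cost_usd": 2_000_000_000, "life_years_saved": 40_000},
    "Regulation B": {"cost_usd": 500_000_000, "life_years_saved": 2_500},
}

for name, r in regulations.items():
    cost_per_life_year = r["cost_usd"] / r["life_years_saved"]
    print(f"{name}: ${cost_per_life_year:,.0f} per life-year saved")

# Regulation A: $50,000 per life-year saved
# Regulation B: $200,000 per life-year saved
# The cheaper-sounding regulation is four times less efficient at saving
# life-years -- the kind of mismatch a rational weighting is meant to expose.
```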
The availability cascade is the mechanism through which biases flow into policy.
The cycle is sometimes sped along deliberately by “availability entrepreneurs,” individuals or organizations who work to ensure a continuous flow of worrying news. The danger is increasingly exaggerated as the media compete for attention-grabbing headlines. Scientists and others who try to dampen the increasing fear and revulsion attract little attention, most of it hostile: anyone who claims that the danger is overstated is suspected of association with a “heinous cover-up.” The issue becomes politically important because it is on everyone’s mind, and the response of the political system is guided by the intensity of public sentiment.
The combination of probability neglect with the social mechanisms of availability cascades inevitably leads to gross exaggeration of minor threats, sometimes with important consequences.
Using base-rate information is the obvious move when no other information is provided.
The error was to focus exclusively on the similarity of the description to the stereotypes—we called it representativeness—ignoring both the base rates and the doubts about the veracity of the description.
The question about probability (likelihood) was difficult, but the question about similarity was easier, and it was answered instead. This is a serious mistake, because judgments of similarity and probability are not constrained by the same logical rules.
Logicians and statisticians have developed competing definitions of probability, all very precise. For laypeople, however, probability (a synonym of likelihood in everyday language) is a vague notion, related to uncertainty, propensity, plausibility, and surprise.
One sin of representativeness is an excessive willingness to predict the occurrence of unlikely (low base-rate) events.
Instructing people to “think like a statistician” enhanced the use of base-rate information, while the instruction to “think like a clinician” had the opposite effect.
The second sin of representativeness is insensitivity to the quality of evidence.
Unless you decide immediately to reject evidence (for example, by determining that you received it from a liar), your System 1 will automatically process the information available as if it were true. There is one thing you can do when you have doubts about the quality of the evidence: let your judgments of probability stay close to the base rate. Don’t expect this exercise of discipline to be easy—it requires a significant effort of self-monitoring and self-control.
To be useful, your beliefs should be constrained by the logic of probability.
The relevant “rules” for cases such as the Tom W problem are provided by Bayesian statistics. This influential modern approach to statistics is named after an English minister of the eighteenth century, the Reverend Thomas Bayes, who is credited with the first major contribution to a large problem: the logic of how people should change their mind in the light of evidence. Bayes’s rule specifies how prior beliefs (in the examples of this chapter, base rates) should be combined with the diagnosticity of the evidence, the degree to which it favors the hypothesis over the alternative.
The essential keys to disciplined Bayesian reasoning can be simply summarized: Anchor your judgment of the probability of an outcome on a plausible base rate. Question the diagnosticity of your evidence.
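A minimal sketch of that discipline, using the odds form of Bayes’s rule (posterior odds = prior odds × likelihood ratio). The base rate and likelihood ratios below are hypothetical, chosen only to show how an anchor on the base rate is moved by more or less diagnostic evidence.

```python
def bayes_update(base_rate, likelihood_ratio):
    """Combine a base rate (prior) with the diagnosticity of the evidence
    (likelihood ratio) and return the posterior probability."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose 3% of graduate students are in a given field (plausible base rate),
# and a personality sketch is 4 times as likely for students in that field
# as for everyone else (its diagnosticity).
print(round(bayes_update(base_rate=0.03, likelihood_ratio=4), 3))    # ~0.11
# Weakly diagnostic evidence (ratio near 1) should leave the judgment
# close to the base rate, as the summary above advises.
print(round(bayes_update(base_rate=0.03, likelihood_ratio=1.2), 3))  # ~0.036
```

With a likelihood ratio of exactly 1 the posterior equals the base rate, which is the limiting case of the advice to keep judgments close to the base rate when the evidence is of doubtful quality.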