Kindle Notes & Highlights
Lawmakers and regulators may be overly responsive to the irrational concerns of citizens, both because of political sensitivity and because they are prone to the same cognitive biases as other citizens.
the mechanism through which biases flow into policy: the availability cascade.
“all heuristics are equal, but availability is more equal...”
An availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action.
The Alar tale illustrates a basic limitation in the ability of our mind to deal with small risks: we either ignore them altogether or give them far too much weight—nothing in between.
the amount of concern is not adequately sensitive to the probability of harm; you are imagining the numerator—the tragic story you saw on the news—and not thinking about the denominator.
The combination of probability neglect with the social mechanisms of availability cascades inevitably leads to gross exaggeration of minor threats, sometimes with important consequences.
Democracy is inevitably messy, in part because the availability and affect heuristics that guide citizens’ beliefs and attitudes are inevitably biased, even if they generally point in the right direction.
Using base-rate information is the obvious move when no other information is provided.
Substitution was perfect in this case: there was no indication that the participants did anything else but judge representativeness.
This is a serious mistake, because judgments of similarity and probability are not constrained by the same logical rules.
anyone who ignores base rates and the quality of evidence in probability assessments will certainly make mistakes.
For laypeople, however, probability (a synonym of likelihood in everyday language) is a vague notion, related to uncertainty, propensity, plausibility, and surprise.
A question about probability or likelihood activates a mental shotgun, evoking answers to easier questions.
Judging probability by representativeness has important virtues: the intuitive impressions that it produces are often—indeed, usually—more accurate than chance guesses would be.
In other situations, the stereotypes are false and the representativeness heuristic will mislead, especially if it causes people to neglect base-rate information that points in another direction.
One sin of representativeness is an excessive willingness to predict the occurrence of unlikely (low base-rate) events.
When an incorrect intuitive judgment is made, System 1 and System 2 should both be indicted. System 1 suggested the incorrect intuition, and System 2 endorsed it and expressed it in a judgment.
You surely understand in principle that worthless information should not be treated differently from a complete lack of information, but WYSIATI makes it very difficult to apply that principle.
Your probability that it will rain tomorrow is your subjective degree of belief, but you should not let yourself believe whatever comes to your mind. To be useful, your beliefs should be constrained by the logic of probability. So if you believe that there is a 40% chance that it will rain sometime tomorrow, you must also believe that there is a 60% chance it will not rain tomorrow, and you must not believe that there is a 50% chance that it will rain tomorrow morning.
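A minimal numeric sketch of that coherence constraint, using the same 40%, 60%, and 50% figures from the highlight above:

```python
# Coherence check on the beliefs described above (40% rain, 50% rain in the morning).
p_rain_tomorrow = 0.40
p_no_rain_tomorrow = 1 - p_rain_tomorrow          # coherence forces this to be 0.60

p_rain_tomorrow_morning = 0.50                    # the incoherent belief in the example

# "Rain tomorrow morning" implies "rain sometime tomorrow", so its probability
# cannot exceed the probability of the broader event.
print(p_rain_tomorrow_morning <= p_rain_tomorrow)  # False -> this set of beliefs is incoherent
```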
Reverend Thomas Bayes, who is credited with the first major contribution to a large problem: the logic of how people should change their mind in the light of evidence.
There are two ideas to keep in mind about Bayesian reasoning and how we tend to mess it up. The first is that base rates matter, even in the presence of evidence about the case at hand. This is often not intuitively obvious. The second is that intuitive impressions of the diagnosticity of evidence are often exaggerated.
The essential keys to disciplined Bayesian reasoning can be simply summarized: Anchor your judgment of the probability of an outcome on a plausible base rate. Question the diagnosticity of your evidence.
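As a rough illustration of both keys, here is a minimal Bayes update in odds form; the 10% base rate and the likelihood ratio of 4 are invented figures, not taken from the book:

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical numbers: a 10% base rate (prior odds 1:9) and evidence that is
# 4 times more likely if the hypothesis is true than if it is false (LR = 4).
prior = 0.10 / 0.90
post_odds = posterior_odds(prior, 4.0)
post_prob = post_odds / (1 + post_odds)
print(round(post_prob, 2))  # ~0.31: seemingly strong evidence still leaves the probability well under 50%
```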
When you specify a possible event in greater detail you can only lower its probability. The problem therefore sets up a conflict between the intuition of representativeness and the logic of probability.
a conjunction fallacy, which people commit when they judge a conjunction of two events (here, bank teller and feminist) to be more probable than one of the events (bank teller) in a direct comparison.
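A small sketch of why the conjunction rule holds, with invented probabilities for the Linda case:

```python
# Every feminist bank teller is also a bank teller, so the conjunction cannot be
# more probable than either component. Probabilities are invented for illustration.
p_bank_teller = 0.05
p_feminist_given_teller = 0.30
p_feminist_bank_teller = p_bank_teller * p_feminist_given_teller   # 0.015

assert p_feminist_bank_teller <= p_bank_teller   # adding detail can only lower (or preserve) probability
```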
The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting.
This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.
Hsee called the resulting pattern less is more. By removing 16 items from Set A (7 of them intact), its value is improved.
From the perspective of economic theory, this result is troubling: the economic value of a dinnerware set or of a collection of baseball cards is a sum-like variable. Adding a positively valued item to the set can only increase its value.
This is also why, as in Hsee’s dinnerware study, single evaluations of the Linda problem produce a less-is-more pattern. System 1 averages instead of adding, so when the non-feminist bank tellers are removed from the set, subjective probability increases.
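A sketch of the adding-versus-averaging contrast, using invented item values that mirror the dinnerware sets described above (24 intact pieces in Set B; those same pieces plus 7 intact and 9 broken ones in Set A):

```python
# Invented item values in the spirit of Hsee's dinnerware study (not his data).
set_b = [10] * 24                        # 24 intact pieces
set_a = set_b + [10] * 7 + [0] * 9       # the same pieces plus 7 more intact and 9 broken ones

# Sum-like (economic) value: Set A is worth at least as much as Set B.
print(sum(set_a), sum(set_b))            # 310 vs 240

# An averaging impression, as System 1 is described as forming, reverses the ranking.
print(sum(set_a) / len(set_a), sum(set_b) / len(set_b))   # 7.75 vs 10.0
```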
Linda was not the only conjunction error that survived joint evaluation. We found similar violations of logic in many other judgments.
Here again, the scenario that was judged more probable was unquestionably more plausible, a more coherent fit with all that was known about the best tennis player in the world.
many subsequent experiments have shown that the frequency representation, as it is known, makes it easy to appreciate that one group is wholly included in the other.
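A minimal sketch of what a frequency framing does; the wording and counts below are assumed for illustration, not quoted from the text:

```python
# Paraphrased frequency framing: "Of 100 people like Linda, how many are bank
# tellers? How many are feminist bank tellers?"
people_like_linda = 100
bank_tellers = 5               # hypothetical count
feminist_bank_tellers = 2      # necessarily drawn from those 5
assert feminist_bank_tellers <= bank_tellers   # with counts, the inclusion is hard to miss
```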
What have we learned from these studies about the workings of System 2? One conclusion, which is not new, is that System 2 is not impressively alert.
In all these cases, the conjunction appeared plausible, and that sufficed for an endorsement by System 2.
The laziness of System 2 is an important fact of life, and the observation that representativeness can block the application of an obvious logical rule is also of some interest.
This reasoning neglects the unique feature of the conjunction fallacy as a case of conflict between intuition and logic.
If you visit a courtroom you will observe that lawyers apply two styles of criticism: to demolish a case they raise doubts about the strongest arguments that favor it; to discredit a witness, they focus on the weakest part of the testimony.
The two versions of the problem are mathematically indistinguishable, but they are psychologically quite different.
In the first version, the base rate of Blue cabs is a statistical fact about the cabs in the city. A mind that is hungry for causal stories finds nothing to chew on: How does the number of Green and Blue cabs in the city cause this cab driver to hit and run?
The cab example illustrates two types of base rates. Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case. Causal base rates change your view of how the individual case came to be. The two types of base-rate information are treated differently:
Statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available.
Causal base rates are treated as information about the individual case and are easily combined with other case-specific information.
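A worked version of the statistical-base-rate case, using the figures commonly quoted for the cab problem (85% Green cabs, 15% Blue cabs, witness correct 80% of the time); these numbers are assumed rather than taken from this excerpt:

```python
# Cab problem with commonly quoted figures (assumed, not given in this excerpt):
# 85% of the city's cabs are Green, 15% are Blue; a witness identifies the cab
# as Blue and is correct 80% of the time.
p_blue, p_green = 0.15, 0.85
p_witness_blue_given_blue = 0.80
p_witness_blue_given_green = 0.20

p_witness_blue = p_blue * p_witness_blue_given_blue + p_green * p_witness_blue_given_green
p_blue_given_witness = p_blue * p_witness_blue_given_blue / p_witness_blue
print(round(p_blue_given_witness, 2))   # ~0.41: far below the witness's 80% accuracy,
                                        # because the statistical base rate still carries weight
```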
Stereotyping is a bad word in our culture, but in my usage it is neutral. One of the basic characteristics of System 1 is that it represents categories as norms and prototypical exemplars.
Some stereotypes are perniciously wrong, and hostile stereotyping can have dreadful consequences, but the psychological facts cannot be avoided: stereotypes, both correct and false, are how we think of categories.
In sensitive social contexts, we do not want to draw possibly erroneous conclusions about the individual from the statistics of the group.
We consider it morally desirable for base rates to be treated as statistical facts about the group rather than as presumptive facts about individuals. In other words, we reject causal base rates.
It is useful to remember, however, that neglecting valid stereotypes inevitably results in suboptimal judgments.
the simplistic idea that the resistance is costless is wrong.
Reliance on the affect heuristic is common in politically charged arguments. The positions we favor have no cost and those we oppose have no benefits. We should be able to do better.