Kindle Notes & Highlights
Lotteries have their place, and they need not be unjust. Acceptable lotteries are used to allocate “goods,” like courses in some universities, or “bads,” like the draft in the military. They serve a purpose. But the judgment lotteries we talk about allocate nothing. They just produce uncertainty. Imagine an insurance company whose underwriters are noiseless and set the optimal premium, but a chance device then intervenes to modify the quote that the client actually sees. Evidently, there would be no justification for such a lottery. Neither is there any justification for a system in which the …
System noise plagues many organizations: an assignment process that is effectively random often decides which doctor sees you in a hospital, which judge hears your case in a courtroom, which patent examiner reviews your application, which customer service representative hears your complaint, and so on. Unwanted variability in these judgments can cause serious problems, including a loss of money and rampant unfairness.
In noisy systems, errors do not cancel out. They add up.
Most of us, most of the time, live with the unquestioned belief that the world looks as it does because that’s the way it is.
We hold a single interpretation of the world around us at any one time, and we normally invest little effort in generating plausible alternatives to it.
In the case of professional judgments, the belief that others see the world much as we do is reinforced every day in multiple ways. First, we share with our colleagues a common language and set of rules about the considerations that should matter in our decisions. We also have the reassuring experience of agreeing with others on the absurdity of judgments that violate these rules. We view the occasional disagreements with colleagues as lapses of judgment on their part. We have little opportunity to notice that our agreed-on rules are vague, sufficient to eliminate some possibilities but not to …
Judgment can therefore be described as measurement in which the instrument is a human mind.
Selective attention and selective recall are a source of variability across people.
To some degree, you might perhaps think harder about a problem whose answer will be revealed soon, because the fear of being exposed concentrates the mind.
What made you feel you got the judgment right, or at least right enough to be your answer? We suggest this feeling is an internal signal of judgment completion, unrelated to any outside information.
And decision makers who choose from several strategic options expect colleagues and observers who have the same information and share the same goals to agree with them, or at least not to disagree too much.
System noise is inconsistency, and inconsistency damages the credibility of the system.
The different errors add up; they do not cancel out.
in professional judgments of all kinds, whenever accuracy is the goal, bias and noise play the same role in the calculation of overall error
people’s intuitions in this regard are almost the mirror image of what they should be: people are very keen to get perfect hits and highly sensitive to small errors, but they hardly care at all about the difference between two large errors.
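The claim that bias and noise "play the same role in the calculation of overall error" is the standard mean-squared-error decomposition: MSE = bias² + noise². A minimal sketch, with entirely hypothetical judgment values, makes the symmetry concrete:

```python
import statistics

# Illustrative sketch (numbers are hypothetical, not from the book):
# overall error, measured as mean squared error (MSE), decomposes into
# bias^2 + noise^2 -- so bias and noise contribute to error identically.
true_value = 100.0
judgments = [104, 97, 109, 101, 94, 106, 99, 102]  # hypothetical judgments

errors = [j - true_value for j in judgments]
bias = statistics.mean(errors)                 # average error (bias)
noise = statistics.pstdev(errors)              # spread around the average
mse = statistics.mean(e ** 2 for e in errors)  # overall squared error

# The decomposition holds exactly with population statistics:
assert abs(mse - (bias ** 2 + noise ** 2)) < 1e-9
```

This is also why, in noisy systems, errors add up rather than cancel: noise enters the total squared error as a positive term regardless of its direction.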
A widely accepted maxim of good decision making is that you should not mix your values and your facts.
Also, negotiators who shift from a good mood to an angry one during the negotiation often achieve good results—something to remember when you’re facing a stubborn counterpart!
People who are in a good mood are more likely to let their biases affect their thinking.
We have described these studies of mood in some detail because we need to emphasize an important truth: you are not the same person at all times.
When physicians are under time pressure, they are apparently more inclined to choose a quick-fix solution, despite its serious downsides. Other studies showed that, toward the end of the day, physicians are more likely to prescribe antibiotics and less likely to prescribe flu shots.
Bad weather is associated with improved memory; judicial sentences tend to be more severe when it is hot outside; and stock market performance is affected by sunshine. In some cases, the effect of the weather is less obvious.
Another source of random variability in judgment is the order in which cases are examined. When a person is considering a case, the decisions that immediately preceded it serve as an implicit frame of reference. Professionals who make a series of decisions in sequence, including judges, loan officers, and baseball umpires, lean toward restoring a form of balance: after a streak, or a series of decisions that go in the same direction, they are more likely to decide in the opposite direction than would be strictly justified. As a result, errors (and unfairness) are inevitable. Asylum judges in …
Who speaks first, who speaks last, who speaks with confidence, who is wearing black, who is seated next to whom, who smiles or frowns or gestures at the right moment—all these factors, and many more, affect outcomes.
were indeed wise as long as they registered their views independently. But if they learned the estimates of other people—for example, the average estimate of a group of twelve—the crowd did worse.
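The crowd-wisdom result can be illustrated with a small simulation (all numbers hypothetical): independent errors largely cancel in the average, but a crowd anchored on a shared, biased early estimate inherits that bias.

```python
import random
import statistics

# Hypothetical simulation: independent guesses vs. socially anchored guesses.
random.seed(1)
truth = 500.0  # e.g., the true number of beans in a jar (made up)

# 1,000 independent guesses: individually noisy, collectively accurate.
independent = [random.gauss(truth, 150) for _ in range(1000)]

# Socially influenced guesses: each person shifts halfway toward a shared
# early estimate that happens to be too high (a biased anchor).
anchor = truth + 80
influenced = [0.5 * g + 0.5 * anchor for g in independent]

err_independent = abs(statistics.mean(independent) - truth)
err_influenced = abs(statistics.mean(influenced) - truth)
# err_influenced is roughly half the anchor's bias -- far worse than the
# independent crowd, whose errors mostly canceled out.
```

The design choice matters: social influence here does not add noise to each guess, it correlates the errors, and correlated errors no longer cancel in the average.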
Recall the basic finding of group polarization: after people talk with one another, they typically end up at a more extreme point in line with their original inclinations. Our experiment illustrates this effect.
And if people care about their reputation within the group, they will shift in the direction of the dominant tendency, which will also produce polarization.
You will not be surprised by our conclusion that the professionals come third in this competition.
the illusion of validity.
evaluating cases on the evidence available and predicting actual outcomes.
In fact, many types of mechanical approaches, from almost laughably simple rules to the most sophisticated and impenetrable machine algorithms, can outperform human judgment. And one key reason for this outperformance—albeit not the only one—is that all mechanical approaches are noise-free.
The immediate implication of Dawes’s work deserves to be widely known: you can make valid statistical predictions without prior data about the outcome that you are trying to predict. All you need is a collection of predictors that you can trust to be correlated with the outcome.
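Dawes's idea of an "improper linear model" can be sketched in a few lines: standardize each predictor you trust to correlate with the outcome, weight them equally, and sum. No outcome data is used to fit weights. The predictors and numbers below are hypothetical:

```python
import statistics

# Sketch of an equal-weight ("improper") linear model in Dawes's spirit.
# Predictor names and values are hypothetical; no outcome data is fitted.

def standardize(values):
    """Rescale a predictor to mean 0, population standard deviation 1."""
    mean, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical ratings of four candidates on three plausible predictors.
gpa       = [3.9, 2.8, 3.2, 3.6]
interview = [7, 5, 9, 6]
work_test = [82, 60, 75, 90]

columns = [standardize(p) for p in (gpa, interview, work_test)]
# Equal weights: each candidate's score is just the sum of standardized values.
scores = [sum(col[i] for col in columns) for i in range(4)]
ranking = sorted(range(4), key=lambda i: -scores[i])  # best candidate first
```

Standardizing first is what makes equal weights meaningful: it puts predictors measured on different scales on a common footing before they are added.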
Many experts ignore the clinical-versus-mechanical debate, preferring to trust their judgment. They have faith in their intuitions and doubt that machines could do better. They regard the idea of algorithmic decision making as dehumanizing and as an abdication of their responsibility.
The internal signal is a self-administered reward, one people work hard (or sometimes not so hard) to achieve when they reach closure on a judgment. It is a satisfying emotional experience, a pleasing sense of coherence, in which the evidence considered and the judgment reached feel right. All the pieces of the jigsaw puzzle seem to fit. (We will see later that this sense of coherence is often bolstered by hiding or ignoring pieces of evidence that don’t fit.)
This emotional experience (“the evidence feels right”) masquerades as rational confidence in the validity of one’s judgment (“I know, even if I don’t know why”).
“The average expert was roughly as accurate as a dart-throwing chimpanzee.”
People who believe themselves capable of an impossibly high level of predictive accuracy are not just overconfident. They don’t merely deny the risk of noise and bias in their judgments. Nor do they simply deem themselves superior to other mortals.
When they listen to their gut, decision makers hear the internal signal and feel the emotional reward it brings.
As long as algorithms are not nearly perfect—and, in many domains, objective ignorance dictates that they will never be—human judgment will not be replaced. That is why it must be improved.
Such low correlation coefficients may come as a surprise if you are used to reading about findings that are presented as “statistically significant” or even “highly significant.” Statistical terms are often misleading to the lay reader, and “significant” may be the worst example of this. When a finding is described as “significant,” we should not conclude that the effect it describes is a strong one. It simply means that the finding is unlikely to be the product of chance alone. With a sufficiently large sample, a correlation can be at once very “significant” and too small to be worth …
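A quick worked example (illustrative numbers, not from the book) shows how a tiny correlation becomes "highly significant" in a large sample, using the standard t-statistic for testing whether a correlation differs from zero:

```python
import math

# Illustrative arithmetic: a weak correlation in a large sample.
r, n = 0.05, 10_000                      # small effect, big sample
t = r * math.sqrt((n - 2) / (1 - r**2))  # t-statistic for testing r != 0
variance_explained = r ** 2              # share of variance the effect explains

# t is about 5 -- far beyond the ~1.96 threshold for p < .05 -- yet the
# correlation explains only 0.25% of the variance in the outcome.
```

"Significant" here certifies only that the effect is unlikely to be pure chance; the effect's size is carried by r (or r²), not by the p-value.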
Relying on causal thinking about a single case is a source of predictable errors. Taking the statistical view, which we will also call the outside view, is a way to avoid these errors.
some of the operations of fast, System 1 thinking are responsible for many judgment errors. In chapter 13, we present three important judgment heuristics on which System 1 extensively relies. We show how these heuristics cause predictable, directional errors (statistical bias) as well as noise.
choice of an appropriate scale is a prerequisite for good judgment and that ill-defined or inadequate scales are an important source of noise.
For instance, when people forecast how long it will take them to complete a project, the mean of their estimates is usually much lower than the time they will actually need. This familiar psychological bias is known as the planning fallacy.
scope insensitivity.
Both questions are examples of taking what we have called the outside view: when you take this view, …
base-rate neglect.
This example illustrates a different type of bias, which we call conclusion bias, or prejudgment. Like Lucas, we often start the process of judgment with an inclination to reach a particular conclusion.
In general, we jump to conclusions, then stick to them.
when there are large individual differences in biases (different prejudgments) or when the effect of biases depends on context (different triggers), there will be noise.
For instance, we will ignore the base rate when we judge probability by similarity.