Kindle Notes & Highlights
“There is so much noise in judgment that a noise-free model of a judge achieves more accurate predictions than the actual judge does.”
And one key reason for this outperformance—albeit not the only one—is that all mechanical approaches are noise-free.
What makes the internal signal important—and misleading—is that it is construed not as a feeling but as a belief.
This emotional experience (“the evidence feels right”) masquerades as rational confidence in the validity of one’s judgment (“I know, even if I don’t know why”).
A person who starts a new job today will encounter many challenges and opportunities, and chance will intervene to change the direction of her life in many ways.
None of these events and circumstances can be predicted today—not by you, not by anyone else, and not by the best predictive model in the world. This intractable uncertainty includes everything that cannot be known at this time about the outcome that you are trying to predict.
Both intractable uncertainty (what cannot possibly be known) and imperfect information (what could be known but isn’t) make perfect prediction impossible.
Pundits blessed with clear theories about how the world works were the most confident and the least accurate.
The team discovered that short-term forecasting is difficult but not impossible, and that some people, whom Tetlock and Mellers called superforecasters, are consistently better at it than most others, including professionals in the intelligence community.
“Wherever there is prediction, there is ignorance, and probably more of it than we think. Have we checked whether the experts we trust are more accurate than dart-throwing chimpanzees?”
“When you trust your gut because of an internal signal, not because of anything you really know, you are in denial of your objective ignorance.”
An extensive review of research in social psychology, covering 25,000 studies and involving 8 million subjects over one hundred years, concluded that “social psychological effects typically yield a value of r [correlation coefficient] equal to .21.”
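One hedged way to read a correlation of that size: for two jointly normal variables with correlation r, the chance that a randomly chosen pair of cases is ordered the same way on both variables (what the book elsewhere calls "percent concordant") is 1/2 + arcsin(r)/π. The snippet below is a minimal sketch of that arithmetic, using only the reported r of .21; it is an illustration, not a computation from the review itself.

```python
import math

def concordance_probability(r: float) -> float:
    """Chance that a random pair of cases is ranked the same way on both
    variables, assuming the two are bivariate normal with correlation r.
    This is a standard statistical result, not a formula from the review."""
    return 0.5 + math.asin(r) / math.pi

# Typical social-psychology effect size reported in the review:
print(f"{concordance_probability(0.21):.1%}")  # roughly 56.7%, barely better than a coin flip
```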
In short, wherever there is causality, there is correlation.
A different mode of thinking, which comes more naturally to our minds, will be called here causal thinking.
Causal thinking creates stories in which specific events, people, and objects affect one another.
As it happens, Jessica Jones, the family’s breadwinner, was laid off a few months ago. She could not find another job, and since then, she has been unable to pay the rent in full. She made partial payments, pleaded with the building manager several times, and even asked you to intervene (you did, but he remained unmoved).
When we give in to this feeling of inevitability, we lose sight of how easily things could have been different—how, at each fork in the road, fate could have taken a different path.
But most human experience falls between these two extremes. We are sometimes in a state in which we actively expect a specific event, and we are sometimes surprised. But most things take place in the broad valley of the normal, where events are neither entirely expected nor especially surprising.
the process of understanding reality is backward-looking.
This is what we mean by understanding a story, and this is what makes reality appear predictable—in hindsight. Because the event explains itself as it occurs, we are under the illusion that it could have been anticipated.
When the search for an obvious cause fails, our first resort is to produce an explanation by filling a blank in our model of the world.
Causal thinking avoids unnecessary effort while retaining the vigilance needed to detect abnormal events.
Beyond an elementary level, statistical thinking also demands specialized training.
For instance, when people forecast how long it will take them to complete a project, the mean of their estimates is usually much lower than the time they will actually need. This familiar psychological bias is known as the planning fallacy.
Did it occur to you to consider the probability that a randomly chosen CEO will still hold the same job two years later? Probably not.
You can think of this base-rate information as a measure of the difficulty of surviving as a CEO.
Substituting a judgment of how easily examples come to mind for an assessment of frequency is known as the availability heuristic.
In the same manner, substituting similarity for probability leads to neglect of base rates, which are quite properly irrelevant when judging similarity.
This example illustrates a different type of bias, which we call conclusion bias, or prejudgment.
When we do that, we let our fast, intuitive System 1 thinking suggest a conclusion. Either we jump to that conclusion and simply bypass the process of gathering and integrating information, or we mobilize System 2 thinking—engaging in deliberate thought—to come up with arguments that support our prejudgment.
Intelligent, Persistent, Cunning, Unprincipled. Your evaluation is no longer favorable, but it did not change enough. For comparison, consider the following description, which another shuffling of the deck could have produced: Unprincipled, Cunning, Persistent, Intelligent.
This experiment illustrates excessive coherence: we form coherent impressions quickly and are slow to change them.
(Another term to describe this phenomenon is the halo effect, because the candidate was evaluated in the positive “halo” of the first impression. We will see in chapter 24 that the halo effect is a serious problem in hiring decisions.)
The same psychological bias creates variable judgments and between-person noise.
In short, psychological biases, as a mechanism, are universal, and they often produce shared errors. But when there are large individual differences in biases (different prejudgments) or when the effect of biases depends on context (different triggers), there will be noise.
Speaking of Heuristics, Biases, and Noise
“We know we have psychological biases, but we should resist the urge to blame every error on unspecified ‘biases.’”
“When we substitute an easier question for the one we should be answering, errors are bound to occur. For instance, we will ignore the base rate when we judge probability by similarity.”
“Prejudgments and other conclusion biases lead people to distort evidence in favor of their initial position.”
“We form impressions quickly and hold on to them even when contradictory information comes in. This tendency is called excessive coherence.”
(The technical term for such prediction errors is that they are nonregressive, because they fail to take into account a statistical phenomenon called regression to the mean.)
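To make "nonregressive" concrete, here is a minimal sketch with assumed numbers (none of them from the book): a statistically proper prediction shrinks the standardized evidence toward the outcome's mean in proportion to the correlation between evidence and outcome, whereas an intuitive "matching" prediction acts as if that correlation were perfect.

```python
# Regressive vs. nonregressive (matching) prediction.
# All numbers below are illustrative assumptions, not figures from the book.

def regressive_prediction(evidence_z: float, correlation: float,
                          outcome_mean: float, outcome_sd: float) -> float:
    """Shrink the standardized evidence toward the mean in proportion
    to its correlation with the outcome (regression to the mean)."""
    return outcome_mean + correlation * evidence_z * outcome_sd

evidence_z = 2.0            # a candidate who looks two SDs above average today
correlation = 0.30          # assumed predictive validity of that impression
mean_perf, sd_perf = 50.0, 10.0

matching = mean_perf + evidence_z * sd_perf                                   # 70: ignores regression
proper = regressive_prediction(evidence_z, correlation, mean_perf, sd_perf)   # 56: regressive
print(matching, proper)
```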
“It is hard to remain consistent when grading these essays. Should you try ranking them instead?”
Our hypothesis, however, was that the question that philosophers find difficult is quite easy for ordinary people, who simplify the task by substituting an easy question for the hard one.
The assumption implicit in the law is that jurors’ sense of justice will lead them directly from a consideration of an offense to the correct punishment.
This assumption is psychological nonsense—it assumes an ability that humans do not have.
“Can we agree on an anchor case that will serve as a reference point on the scale?”
Multiple, conflicting cues create the ambiguity that defines difficult judgment problems.
“The uniqueness of people’s personalities is what makes them capable of innovation and creativity, and simply interesting and exciting to be around. When it comes to judgment, however, that uniqueness is not an asset.”
We can now decompose:
• error into bias and system noise,
• system noise into level noise and pattern noise,
• pattern noise into stable pattern noise and occasion noise.
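Read as mean squared components, this decomposition can be written out as below; this is a sketch of the book's error equations, with each named term entering as a squared (variance-like) component.

```latex
\begin{align}
\text{Error (MSE)} &= \text{Bias}^2 + \text{System Noise}^2 \\
\text{System Noise}^2 &= \text{Level Noise}^2 + \text{Pattern Noise}^2 \\
\text{Pattern Noise}^2 &= \text{Stable Pattern Noise}^2 + \text{Occasion Noise}^2
\end{align}
```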
Universities, for instance, address this problem when they require professors to abide by a predetermined distribution of grades within each class.
A substantial body of research in psychology and behavioral economics has documented a long list of psychological biases: the planning fallacy, overconfidence, loss aversion, the endowment effect, the status quo bias, excessive discounting of the future (“present bias”), and many others—including, of course, biases for or against various categories of people.
The invisibility of noise is a direct consequence of causal thinking. Noise is inherently statistical: it becomes visible only when we think statistically about an ensemble of similar judgments.
Unfortunately, taking the statistical view is not easy. We effortlessly invoke causes for the events we observe, but thinking statistically about them must be learned and remains effortful. Causes are natural; statistics are difficult.
Three things matter. Judgments are both less noisy and less biased when those who make them are well trained, are more intelligent, and have the right cognitive style. In other words: good judgments depend on what you know, how well you think, and how you think. Good judges tend to be experienced and smart, but they also tend to be actively open-minded and willing to learn from new information.