Noise: A Flaw in Human Judgment
Read between July 27 and October 21, 2021
23%
the correlation between two variables is their percentage of shared determinants.
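Not a quote from the book: a minimal simulation sketch of this idea. If two standardized variables are each built from the same shared determinant plus an independent unique part, their correlation comes out close to the fraction of variance contributed by the shared determinant. All names and numbers below are made up for illustration.

```python
# Illustrative sketch (not from the book): two variables that share 30% of
# their determinants end up correlated at roughly 0.30.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
shared_fraction = 0.30  # hypothetical share of common determinants

shared = rng.normal(size=n) * np.sqrt(shared_fraction)
x = shared + rng.normal(size=n) * np.sqrt(1 - shared_fraction)
y = shared + rng.normal(size=n) * np.sqrt(1 - shared_fraction)

print(np.corrcoef(x, y)[0, 1])  # prints a value near 0.30
```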
24%
The use of multiple regression is an example of mechanical prediction.
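As a side note (not from the book), this is roughly what mechanical prediction by multiple regression looks like: weights are estimated once, then the same formula is applied identically to every case. The predictor names and data here are hypothetical.

```python
# Illustrative sketch (not from the book): multiple regression used as a
# mechanical predictor with fixed weights.
import numpy as np

rng = np.random.default_rng(1)
n = 500
test_score = rng.normal(size=n)   # hypothetical predictor
interview = rng.normal(size=n)    # hypothetical predictor
performance = 0.5 * test_score + 0.3 * interview + rng.normal(size=n)

X = np.column_stack([np.ones(n), test_score, interview])
weights, *_ = np.linalg.lstsq(X, performance, rcond=None)

def predict(test, interview_rating):
    """Mechanical prediction: the same fitted formula for every case."""
    return weights @ np.array([1.0, test, interview_rating])

print(predict(1.2, -0.4))
```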
25%
Meehl discovered that clinicians and other professionals are distressingly weak in what they often see as their unique strength: the ability to integrate information.
25%
The illusion of validity is found wherever predictive judgments are made, because of a common failure to distinguish between two stages of the prediction task: evaluating cases on the evidence available and predicting actual outcomes.
25%
If you are confused by the distinction between cases and predictions, you are in excellent company: everybody finds that distinction confusing.
25%
A 2000 review of 136 studies confirmed unambiguously that mechanical aggregation outperforms clinical judgment.
25%
The findings support a blunt conclusion: simple models beat humans.
26%
a review of fifty years of research concluded that models of judges consistently outperformed the judges they modeled.
26%
Complexity and richness do not generally lead to more accurate predictions.
26%
complex rules will often give you only the illusion of validity and in fact harm the quality of your judgments.
26%
Reducing noise mechanically increases the validity of predictive judgment.
26%
In short, replacing you with a model of you does two things: it eliminates your subtlety, and it eliminates your pattern noise.
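A sketch (not from the book) of what a "model of you" can mean in practice: a simple linear model is fitted to a judge's own past ratings and then used in place of the judge, keeping the judge's average weighting of the cues while dropping the noise. Everything below is hypothetical.

```python
# Illustrative sketch (not from the book): fitting a linear "model of the
# judge" to that judge's own noisy ratings, then predicting with the model.
import numpy as np

rng = np.random.default_rng(2)
n = 300
cues = rng.normal(size=(n, 3))               # hypothetical case features
implicit_policy = np.array([0.6, 0.3, 0.1])  # the judge's average weighting

# The judge's ratings: average policy plus noisy, case-specific reactions.
judge_ratings = cues @ implicit_policy + rng.normal(scale=0.8, size=n)

# Fit the model of the judge by ordinary least squares.
X = np.column_stack([np.ones(n), cues])
beta, *_ = np.linalg.lstsq(X, judge_ratings, rcond=None)

# The model applies the recovered policy consistently, with no added noise.
new_case = np.array([1.0, 0.2, -1.1, 0.5])   # intercept term plus three cues
print(beta @ new_case)
```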
26%
even when the complex rules are valid in principle, they inevitably apply under conditions that are rarely observed.
26%
it proved almost impossible in that study to generate a simple model that did worse than the experts did.
26%
the fact that mechanical adherence to a simple rule (Yu and Kuncel call it “mindless consistency”) could significantly improve judgment in a difficult problem illustrates the massive effect of noise on the validity of clinical predictions.
27%
all mechanical approaches are noise-free.
27%
Equal-weight models do well because they are not susceptible to accidents of sampling.
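Not from the book: a minimal sketch of an equal-weight model in the spirit of this passage. Each predictor is standardized and the results are simply averaged, so there are no fitted weights that could chase accidents of the particular sample.

```python
# Illustrative sketch (not from the book): an equal-weight model standardizes
# each predictor and averages them; no weights are estimated from outcome
# data, so there is nothing to overfit.
import numpy as np

def equal_weight_score(predictors: np.ndarray) -> np.ndarray:
    """predictors has shape (n_cases, n_predictors); higher score is better."""
    z = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0)
    return z.mean(axis=1)

cases = np.array([[620.0, 3.4, 2.0],
                  [580.0, 3.9, 5.0],
                  [700.0, 3.1, 1.0]])  # made-up predictor values
print(equal_weight_score(cases))
```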
27%
the combination of two or more correlated predictors is barely more predictive than the best of them on its own.
27%
Because, in real life, predictors are almost always correlated to one another, this statistical fact supports the use of frugal approaches to prediction, which use a small number of predictors.
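A small numeric sketch (not from the book; the correlations are made up) of why a second, correlated predictor adds so little. Using the standard two-predictor formula for the multiple correlation, a predictor with validity 0.35 that correlates 0.60 with a first predictor of validity 0.40 lifts the multiple correlation only to about 0.42.

```python
# Illustrative sketch (not from the book): multiple correlation for two
# correlated predictors, via the standard two-predictor formula.
import math

r_y1, r_y2 = 0.40, 0.35   # hypothetical validities of the two predictors
r_12 = 0.60               # hypothetical correlation between the predictors

r_squared = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
print(round(math.sqrt(r_squared), 2))  # about 0.42, barely above 0.40
```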
28%
What AI does involves no magic and no understanding; it is mere pattern finding.
29%
all mechanical prediction techniques, not just the most recent and more sophisticated ones, represent significant improvements on human judgment.
29%
The combination of personal patterns and occasion noise weighs so heavily on the quality of human judgment that simplicity and noiselessness are sizable advantages.
29%
Many experts ignore the clinical-versus-mechanical debate, preferring to trust their judgment.
29%
One key insight has emerged from recent research: people are not systematically suspicious of algorithms. When given a choice between taking advice from a human and an algorithm, for instance, they often prefer the algorithm.
29%
We expect machines to be perfect. If this expectation is violated, we discard them.
30%
This emotional experience (“the evidence feels right”) masquerades as rational confidence in the validity of one’s judgment (“I know, even if I don’t know why”).
30%
Confidence is no guarantee of accuracy.
30%
These unknowns are not problems of bias or noise in your judgment; they are objective characteristics of the task.
30%
Overconfidence is one of the best-documented cognitive biases.
31%
The limit on expert political judgment is set not by the cognitive limitation of forecasters but by their intractable objective ignorance of the future.
31%
They do, however, deserve some criticism for attempting an impossible task and for believing they can succeed in it.
31%
There is essentially no evidence of situations in which people do very poorly and models do very well with the same information.
31%
people often mistake their subjective sense of confidence for an indication of predictive validity.
31%
The denial of ignorance
32%
The same executives routinely make significant changes in their ways of working to capture gains that are not nearly as large.
32%
Despite all the evidence in favor of mechanical and algorithmic prediction methods, and despite the rational calculus that clearly shows the value of incremental improvements in predictive accuracy, many decision makers will reject decision-making approaches that deprive them of the ability to exercise their intuition. As long as algorithms are not nearly perfect—and, in many domains, objective ignorance dictates that they will never be—human judgment will not be replaced. That is why it must be improved.
33%
When a finding is described as statistically “significant,” we should not conclude that the effect it describes is a strong one. It simply means that the finding is unlikely to be the product of chance alone.
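Not from the book: a quick sketch of this point. With a large enough sample, even a trivially small difference comes out statistically “significant”; the p-value only says that chance alone is an unlikely explanation. All numbers are made up.

```python
# Illustrative sketch (not from the book): a tiny effect (0.02 standard
# deviations) is highly "significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200_000
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p = {p_value:.2g}, mean difference = {group_b.mean() - group_a.mean():.3f}")
```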
33%
To understand is to describe a causal chain.
33%
The ability to make a prediction is a measure of whether such a causal chain has indeed been identified. And correlation, the measure of predictive accuracy, is a measure of how much causation we can explain.
33%
While correlation does not imply causation, causation does imply correlation.
33%
why do professionals—and why do we all—seem to underestimate our objective ignorance of the world?
33%
Causal thinking creates stories in which specific events, people, and objects affect one another.
33%
Whatever the outcome (eviction or not), once it has happened, causal thinking makes it feel entirely explainable, indeed predictable.
33%
In the valley of the normal, events unfold just like the Joneses’ eviction: they appear normal in hindsight, although they were not expected, and although we could not have predicted them.
33%
the process of understanding reality is backward-looking.
34%
Because the event explains itself as it occurs, we are under the illusion that it could have been anticipated.
34%
theme of this book.
34%
Relying on causal thinking about a single case is a source of predictable errors. Taking the statistical view, which we will also call the outside view, is a way to avoid these errors.
34%
Causal thinking helps us make sense of a world that is far less predictable than we think. It also explains why we view the world as far more predictable than it really is.
35%
central idea of the program was that people who are asked a difficult question use simplifying operations, called heuristics.