Kindle Notes & Highlights
the correlation between two variables is their percentage of shared determinants.
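As an aside, this reading of correlation is easy to check with a small simulation. The sketch below is purely illustrative and assumes the simplest possible case: each variable is an equally weighted sum of many independent determinants, and the two variables overlap in a known fraction of them. The observed correlation then tracks that shared fraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_from_shared(shared_fraction, n_determinants=100, n_cases=50_000):
    """Two scores are equally weighted sums of independent determinants;
    they share `shared_fraction` of those determinants."""
    n_shared = int(shared_fraction * n_determinants)
    shared = rng.standard_normal((n_cases, n_shared))
    only_x = rng.standard_normal((n_cases, n_determinants - n_shared))
    only_y = rng.standard_normal((n_cases, n_determinants - n_shared))
    x = shared.sum(axis=1) + only_x.sum(axis=1)
    y = shared.sum(axis=1) + only_y.sum(axis=1)
    return np.corrcoef(x, y)[0, 1]

for fraction in (0.2, 0.5, 0.8):
    print(f"{fraction:.0%} shared determinants -> correlation ~ "
          f"{correlation_from_shared(fraction):.2f}")
```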
The use of multiple regression is an example of mechanical prediction.
Meehl discovered that clinicians and other professionals are distressingly weak in what they often see as their unique strength: the ability to integrate information.
The illusion of validity is found wherever predictive judgments are made, because of a common failure to distinguish between two stages of the prediction task: evaluating cases on the evidence available and predicting actual outcomes.
If you are confused by this distinction between cases and predictions, you are in excellent company: Everybody finds that distinction confusing.
A 2000 review of 136 studies confirmed unambiguously that mechanical aggregation outperforms clinical judgment.
The findings support a blunt conclusion: simple models beat humans.
a review of fifty years of research concluded that models of judges consistently outperformed the judges they modeled.
Complexity and richness do not generally lead to more accurate predictions.
complex rules will often give you only the illusion of validity and in fact harm the quality of your judgments.
Reducing noise mechanically increases the validity of predictive judgment.
In short, replacing you with a model of you does two things: it eliminates your subtlety, and it eliminates your pattern noise.
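A toy illustration of the “model of you” idea (the model-of-the-judge result cited above): in the hypothetical simulation below, a judge weights two valid cues sensibly but adds personal noise to every case. A linear model fitted to the judge’s own ratings strips out that noise and ends up predicting the true outcome better than the judge does. The cue weights and noise levels are arbitrary assumptions chosen for illustration, not figures from the studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases = 2_000

# Two cues that genuinely predict the outcome.
cue_a = rng.standard_normal(n_cases)
cue_b = rng.standard_normal(n_cases)
outcome = 0.6 * cue_a + 0.4 * cue_b + rng.standard_normal(n_cases)

# A simulated judge: sensible cue weights plus personal noise on every case.
judge = 0.6 * cue_a + 0.4 * cue_b + rng.standard_normal(n_cases)

# "Model of the judge": regress the judge's ratings on the cues and use
# the fitted, noise-free formula in place of the judge.
X = np.column_stack([cue_a, cue_b, np.ones(n_cases)])
weights, *_ = np.linalg.lstsq(X, judge, rcond=None)
model_of_judge = X @ weights

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print("judge vs. outcome:         ", round(corr(judge, outcome), 2))
print("model of judge vs. outcome:", round(corr(model_of_judge, outcome), 2))
```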
even when the complex rules are valid in principle, they inevitably apply under conditions that are rarely observed.
it proved almost impossible in that study to generate a simple model that did worse than the experts did.
the fact that mechanical adherence to a simple rule (Yu and Kuncel call it “mindless consistency”) could significantly improve judgment in a difficult problem illustrates the massive effect of noise on the validity of clinical predictions.
all mechanical approaches are noise-free.
Equal-weight models do well because they are not susceptible to accidents of sampling.
the combination of two or more correlated predictors is barely more predictive than the best of them on its own.
Because, in real life, predictors are almost always correlated to one another, this statistical fact supports the use of frugal approaches to prediction, which use a small number of predictors.
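A quick way to see why correlated predictors add so little: in the hypothetical simulation below, two predictors share a common component, so each is moderately valid on its own (r of about 0.5) while correlating about 0.7 with each other. Combining them lifts the predictive correlation only slightly, to roughly 0.57. The specific numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Two predictors built from a shared component, so they are correlated
# with each other and each is only moderately valid on its own.
common = rng.standard_normal(n)
pred_1 = common + 0.7 * rng.standard_normal(n)
pred_2 = common + 0.7 * rng.standard_normal(n)
outcome = common + 1.2 * rng.standard_normal(n)

corr = lambda a, b: np.corrcoef(a, b)[0, 1]

print("pred_1 with pred_2:           ", round(corr(pred_1, pred_2), 2))            # ~0.67
print("pred_1 alone with outcome:    ", round(corr(pred_1, outcome), 2))           # ~0.52
print("equal-weight sum with outcome:", round(corr(pred_1 + pred_2, outcome), 2))  # ~0.57
```

By symmetry, the equal-weight sum is already the best linear combination in this setup, which is one reason frugal, equal-weight models give up so little relative to a fitted regression.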
What AI does involves no magic and no understanding; it is mere pattern finding.
all mechanical prediction techniques, not just the most recent and most sophisticated ones, represent significant improvements on human judgment.
The combination of personal patterns and occasion noise weighs so heavily on the quality of human judgment that simplicity and noiselessness are sizable advantages.
Many experts ignore the clinical-versus-mechanical debate, preferring to trust their judgment.
One key insight has emerged from recent research: people are not systematically suspicious of algorithms. When given a choice between taking advice from a human and an algorithm, for instance, they often prefer the algorithm.
We expect machines to be perfect. If this expectation is violated, we discard them.
This emotional experience (“the evidence feels right”) masquerades as rational confidence in the validity of one’s judgment (“I know, even if I don’t know why”).
Confidence is no guarantee of accuracy.
These unknowns are not problems of bias or noise in your judgment; they are objective characteristics of the task.
Overconfidence is one of the best-documented cognitive biases.
The limit on expert political judgment is set not by the cognitive limitation of forecasters but by their intractable objective ignorance of the future.
They do, however, deserve some criticism for attempting an impossible task and for believing they can succeed in it.
There is essentially no evidence of situations in which people do very poorly and models do very well with the same information.
people often mistake their subjective sense of confidence for an indication of predictive validity.
The denial of ignorance
The same executives routinely make significant changes in their ways of working to capture gains that are not nearly as large.
Despite all the evidence in favor of mechanical and algorithmic prediction methods, and despite the rational calculus that clearly shows the value of incremental improvements in predictive accuracy, many decision makers will reject decision-making approaches that deprive them of the ability to exercise their intuition. As long as algorithms are not nearly perfect—and, in many domains, objective ignorance dictates that they will never be—human judgment will not be replaced. That is why it must be improved.
When a finding is described as statistically “significant,” we should not conclude that the effect it describes is a strong one. It simply means that the finding is unlikely to be the product of chance alone.
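The point can be demonstrated directly: with a large enough sample, even a trivially small effect clears the conventional significance threshold. The sketch below uses made-up numbers and scipy’s pearsonr to generate a predictor that explains well under 1% of the variance, yet yields a p-value far below 0.05.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 100_000  # a very large sample

# A tiny true effect: x explains well under 1% of the variance of y.
x = rng.standard_normal(n)
y = 0.03 * x + rng.standard_normal(n)

r, p = pearsonr(x, y)
print(f"correlation r      = {r:.3f}")    # ~0.03: a negligible effect
print(f"variance explained = {r**2:.2%}")  # ~0.09%
print(f"p-value            = {p:.1e}")    # far below 0.05: "significant"
```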
To understand is to describe a causal chain.
The ability to make a prediction is a measure of whether such a causal chain has indeed been identified. And correlation, the measure of predictive accuracy, is a measure of how much causation we can explain.
correlation does not imply causation, but causation does imply correlation.
why do professionals—and why do we all—seem to underestimate our objective ignorance of the world?
Causal thinking creates stories in which specific events, people, and objects affect one another.
Whatever the outcome (eviction or not), once it has happened, causal thinking makes it feel entirely explainable, indeed predictable.
In the valley of the normal, events unfold just like the Joneses’ eviction: they appear normal in hindsight, although they were not expected, and although we could not have predicted them.
the process of understanding reality is backward-looking.
Because the event explains itself as it occurs, we are under the illusion that it could have been anticipated.
A recurring theme of this book:
Relying on causal thinking about a single case is a source of predictable errors. Taking the statistical view, which we will also call the outside view, is a way to avoid these errors.
Causal thinking helps us make sense of a world that is far less predictable than we think. It also explains why we view the world as far more predictable than it really is.
central idea of the program was that people who are asked a difficult question use simplifying operations, called heuristics.