Noise: A Flaw in Human Judgment
Read between July 31 - August 28, 2024
26%: [A] striking finding was that any linear model, when applied consistently to all cases, was likely to outdo human judges in predicting an outcome from the same information.
26%: to put it bluntly, it proved almost impossible in that study to generate a simple model that did worse than the experts.
26%: human judges performed very poorly in absolute terms, which helps explain why even unimpressive linear models outdid them.
26%: noise impairs clinical judgment.
26%: human experts are easily outperformed by simple formulas.
26%: This finding argues in favor of using noise-free methods: rules and algorithms, which are the... [This highlight has been truncated due to consecutive passage length restrictions.]
26%: “[a] noise-free model of a judge achieves more accurate predictions than the actual judge does.”
26%: By this definition, simple models and other forms of mechanical judgment we described in the previous chapter are algorithms,
26%: that all mechanical approaches are noise-free.
27%: instead of using multiple regression to determine the precise weight of each predictor, he proposed giving all the predictors equal weights.
27%: equal-weight models are about as accurate as “proper” regression models, and far superior to clinical judgments.
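The equal-weight result can be illustrated with a small synthetic sketch (Python with NumPy; the predictors, weights, and sample sizes are invented for illustration, not taken from the studies the book describes). A regression fitted to a small training sample and a Dawes-style equal-weight model are both scored on the same large fresh sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Three standardized predictors plus noise; true weights 0.5, 0.3, 0.2."""
    X = rng.standard_normal((n, 3))
    y = X @ np.array([0.5, 0.3, 0.2]) + rng.standard_normal(n)
    return X, y

X_tr, y_tr = make_data(40)        # small training sample, as in clinical studies
X_te, y_te = make_data(10_000)    # large fresh sample for honest evaluation

# "Proper" model: least-squares weights fitted to the training sample.
w_ols, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Dawes-style "improper" model: equal weights on the standardized predictors.
w_eq = np.ones(3) / 3

def corr(pred, y):
    return np.corrcoef(pred, y)[0, 1]

r_ols = corr(X_te @ w_ols, y_te)  # out-of-sample accuracy of fitted weights
r_eq = corr(X_te @ w_eq, y_te)    # out-of-sample accuracy of equal weights
```

On data like these, the two correlations typically land within a few hundredths of each other: the precisely fitted weights buy very little once they have to generalize beyond the sample they were fitted on.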
27%: multiple regression computes “optimal” weights that minimize squared errors.
27%: But multiple regression minimizes error in the original data.
27%: challenge is that when the formula is applied out of sample—
27%: correct measure of a model’s predictive accuracy is its performance in a new sample, called its cross-validated correlation.
27%: loss of accuracy in cross-validation
27%: flukes loom larger in small samples.
27%: The problem Dawes pointed out is that the samples used in social science research are generally so small that the advantage of so... [This highlight has been truncated due to consecutive passage length restrictions.]
27%: [In] Dawes’s words, “we do not need models more precise than our measurements.” Equal-weight models do well because they are not susceptible to accidents of sampling.
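The gap between fitting and predicting can be sketched the same way (synthetic data again; every number is illustrative). A regression fitted to a small sample looks better on that sample than on fresh cases, because it has absorbed sampling flukes, and the gap shrinks as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def in_vs_out(n_train):
    """Fit a 5-predictor regression on n_train cases; return its in-sample
    correlation and its cross-validated correlation on a fresh sample."""
    X = rng.standard_normal((n_train, 5))
    y = 0.4 * X[:, 0] + rng.standard_normal(n_train)  # only predictor 0 is real
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    r_in = np.corrcoef(X @ w, y)[0, 1]                # fit to the original data
    X_new = rng.standard_normal((100_000, 5))
    y_new = 0.4 * X_new[:, 0] + rng.standard_normal(100_000)
    r_cv = np.corrcoef(X_new @ w, y_new)[0, 1]        # cross-validated correlation
    return r_in, r_cv

r_in_small, r_cv_small = in_vs_out(30)     # flukes loom large at n = 30
r_in_large, r_cv_large = in_vs_out(3000)   # and much less so at n = 3000
```

The four spurious predictors are weighted only because they happened to correlate with the outcome in the small training sample; those accidental weights are exactly what equal-weight models refuse to learn.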
27%: “robust beauty” in equal weights.
27%: “The whole trick is to decide what variables to look at and then to know how to add.”
27%: Another style of simplification is through frugal models, or simple rules.
27%: [In] some settings, they can produce surprisingly good predictions.
27%: [This] example illustrates a general rule: the combination of two or more correlated predictors is barely more predictive than the best of them on its own. Because, in real life, predictors are almost always correlated to one another,
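That general rule can be checked on a hypothetical example (synthetic data; the shared component and correlation level are invented): once two predictors are substantially correlated, even the optimal combination of both adds little over the better one alone.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Two predictors share a common component, so they are correlated with
# each other (about 0.67 here) as well as with the outcome.
common = rng.standard_normal(n)
x1 = common + 0.7 * rng.standard_normal(n)
x2 = common + 0.7 * rng.standard_normal(n)
y = common + 1.5 * rng.standard_normal(n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_single = corr(x1, y)  # the best predictor on its own

X = np.column_stack([x1, x2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
r_combined = corr(X @ w, y)  # optimal combination of both predictors
```

Since the second predictor mostly repeats the information already in the first, the combined correlation exceeds the single-predictor correlation only by a few hundredths.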
27%: this frugal model performed as well as statistical models that used a much larger number of variables.
27%: In all these tasks, the frugal rule did as well as more complex regression models did (though generally not as well as machine learning did).
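A sketch of the frugal-rule comparison, under invented assumptions (eight synthetic predictors with made-up weights, not the bail or medical datasets the book reports): a unit-weight rule on the two strongest predictors is scored against a full eight-predictor regression on fresh cases.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_data(n):
    """Eight predictors; the first two carry most of the signal."""
    X = rng.standard_normal((n, 8))
    beta = np.array([0.5, 0.4, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
    y = X @ beta + rng.standard_normal(n)
    return X, y

X_tr, y_tr = make_data(200)      # modest training sample
X_te, y_te = make_data(50_000)   # fresh evaluation sample

# Complex model: regression on all eight predictors.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
r_full = np.corrcoef(X_te @ w, y_te)[0, 1]

# Frugal rule: unit weights on the two strongest predictors, nothing else.
# (Which two predictors matter is assumed known here; choosing the
# variables is, as Dawes put it, "the whole trick.")
r_frugal = np.corrcoef(X_te[:, 0] + X_te[:, 1], y_te)[0, 1]
```

The frugal rule gives up a little accuracy in exchange for being transparent and trivially easy to apply, which is exactly the trade-off the surrounding highlights describe.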
28%: The appeal of frugal rules is that they are transparent and easy to apply. Moreover, these advantages are obtained at relatively little cost in accuracy relative to more complex models.
28%: This, in essence, is the promise of AI.
28%: large data sets make it possible to deal mechanically with broken-leg exceptions.
28%: tells them when to override the model and when not to.
28%: Since this personal pattern is highly likely to be invalid, you should refrain from overriding the model; your intervention is likely to make the prediction less accurate.
28%: mere pattern finding.
28%: probably take some time for an AI to understand why
28%: In other words, the machine-learning model performs much better than human judges do at predicting which defendants are high risks.
28%: In other words, some patterns in the data, though rare, strongly predict high risk.
28%: [The predictions of] this algorithm are in important respects less racially biased than those of the judges, not more.
29%: Before drawing general conclusions about algorithms, however, we should remember that some algorithms are not only more accurate than human judges but also fairer.
29%: all mechanical prediction techniques, not just the most recent and more sophisticated ones, represent significant improvements on human judgment.
29%: Simple rules that are merely sensible typically do better than human judgment.
29%: ability to exploit much more information.
29%: people who work with them often trust their gut and insist that statistical analysis cannot possibly replace good judgment.
29%: The authors concluded that the resistance of clinicians can be explained by a combination of sociopsychological factors, including their “fear of technological unemployment,” “poor education,” and a “general dislike of computers.”
29%: people are not systematically suspicious of algorithms.
29%: Resistance to algorithms, or algorithm aversion,
29%: We expect machines to be perfect. If this expectation is violated, we discard them.
29%: “[The] simplest rules and algorithms have big advantages over human judges: they are free of noise, and they do not attempt to apply complex, usually invalid insights about the predictors.”
29%: “The algorithm makes mistakes, of course. But if human judges make even more mistakes, whom should we trust?”