Kindle Notes & Highlights
striking finding was that any linear model, when applied consistently to all cases, was likely to outdo human judges in predicting an outcome from the same information.
to put it bluntly, it proved almost impossible in that study to generate a simple model that did worse than the experts
human judges performed very poorly in absolute terms, which helps explain why even unimpressive linear models outdid them.
noise impairs clinical judgment.
human experts are easily outperformed by simple formulas
This finding argues in favor of using noise-free methods: rules and algorithms, which are the...
noise-free model of a judge achieves more accurate predictions than the actual judge does.”
By this definition, simple models and other forms of mechanical judgment we described in the previous chapter are algorithms,
that all mechanical approaches are noise-free.
instead of using multiple regression to determine the precise weight of each predictor, he proposed giving all the predictors equal weights.
equal-weight models are about as accurate as “proper” regression models, and far superior to clinical judgments.
multiple regression computes “optimal” weights that minimize squared errors.
But multiple regression minimizes error in the original data.
challenge is that when the formula is applied out of sample—
correct measure of a model’s predictive accuracy is its performance in a new sample, called its cross-validated correlation.
loss of accuracy in cross-validation
flukes loom larger in small samples.
The problem Dawes pointed out is that the samples used in social science research are generally so small that the advantage of so...
Dawes’s words, “we do not need models more precise than our measurements.” Equal-weight models do well because they are not susceptible to accidents of sampling.
“robust beauty” in equal weights.
“The whole trick is to decide what variables to look at and then to know how to add.”
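The contrast drawn above can be illustrated with a small simulation. This is a sketch of my own, not from the book: the predictors, sample sizes, and true weights are all made up for illustration. A regression fitted on a small sample learns "optimal" weights that partly fit sampling flukes, so its cross-validated correlation in a fresh sample is typically no better than simply standardizing the predictors and adding them up.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, beta, noise=1.0):
    """Generate three positively correlated predictors (via a shared
    common factor) and an outcome that is a noisy linear combination."""
    X = rng.normal(size=(n, 3)) + rng.normal(size=(n, 1))
    y = X @ beta + noise * rng.normal(size=n)
    return X, y

beta_true = np.array([0.5, 0.4, 0.3])          # hypothetical true weights
X_train, y_train = simulate(40, beta_true)      # small training sample
X_test, y_test = simulate(10_000, beta_true)    # large "new" sample

# "proper" regression: least-squares weights fitted on the small sample
w_ols, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def equal_weight_score(X):
    """Dawes-style improper model: standardize each predictor, then add."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z.sum(axis=1)

# cross-validated accuracy: correlation with the outcome in the new sample
r_ols = np.corrcoef(X_test @ w_ols, y_test)[0, 1]
r_eq = np.corrcoef(equal_weight_score(X_test), y_test)[0, 1]
print(f"cross-validated r, fitted weights: {r_ols:.3f}")
print(f"cross-validated r, equal weights:  {r_eq:.3f}")
```

With these invented numbers the two models land within a few hundredths of each other, which is the "robust beauty" point: equal weights cannot chase accidents of sampling, so they lose almost nothing out of sample.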
Another style of simplification is through frugal models, or simple rules.
some settings, they can produce surprisingly good predictions.
example illustrates a general rule: the combination of two or more correlated predictors is barely more predictive than the best of them on its own. Because, in real life, predictors are almost always correlated to one another,
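The general rule stated above has a simple arithmetic basis. The snippet below uses the standard formula for the multiple correlation of two standardized predictors; the specific correlation values are hypothetical numbers chosen for illustration, not figures from the book.

```python
import math

def multiple_R(r1, r2, rho):
    """Multiple correlation of an outcome with two standardized
    predictors, where r1 and r2 are each predictor's correlation with
    the outcome and rho is the predictors' correlation with each other."""
    R2 = (r1**2 + r2**2 - 2 * r1 * r2 * rho) / (1 - rho**2)
    return math.sqrt(R2)

# hypothetical numbers: each predictor correlates .50 with the outcome,
# and the two predictors correlate .70 with each other
best_alone = 0.50
combined = multiple_R(0.50, 0.50, rho=0.70)
print(f"best single predictor: r = {best_alone:.2f}")
print(f"both combined:         R = {combined:.2f}")
```

Combining the second predictor lifts the correlation only from .50 to about .54: when predictors are themselves correlated, the second one carries little information the first did not already supply.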
this frugal model performed as well as statistical models that used a much larger number of variables.
In all these tasks, the frugal rule did as well as more complex regression models did (though generally not as well as machine learning did).
The appeal of frugal rules is that they are transparent and easy to apply. Moreover, these advantages come at relatively little cost in accuracy compared with more complex models.
This, in essence, is the promise of AI.
large data sets make it possible to deal mechanically with broken-leg exceptions.
tells them when to override the model and when not to.
Since this personal pattern is highly likely to be invalid, you should refrain from overriding the model; your intervention is likely to make the prediction less accurate.
mere pattern finding.
probably take some time for an AI to understand why
In other words, the machine-learning model performs much better than human judges do at predicting which defendants are high risks.
In other words, some patterns in the data, though rare, strongly predict high risk.
the predictions of this algorithm are in important respects less racially biased than those of the judges, not more.
Before drawing general conclusions about algorithms, however, we should remember that some algorithms are not only more accurate than human judges but also fairer.
all mechanical prediction techniques, not just the most recent and more sophisticated ones, represent significant improvements on human judgment.
Simple rules that are merely sensible typically do better than human judgment.
ability to exploit much more information.
people who work with them often trust their gut and insist that statistical analysis cannot possibly replace good judgment.
The authors concluded that the resistance of clinicians can be explained by a combination of sociopsychological factors, including their “fear of technological unemployment,” “poor education,” and a “general dislike of computers.”
people are not systematically suspicious of algorithms.
Resistance to algorithms, or algorithm aversion,
We expect machines to be perfect. If this expectation is violated, we discard them.
simplest rules and algorithms have big advantages over human judges: they are free of noise, and they do not attempt to apply complex, usually invalid insights about the predictors.”
“The algorithm makes mistakes, of course. But if human judges make even more mistakes, whom should we trust?”