To use Dawes’s phrase, which has become a meme among students of judgment, there is a “robust beauty” in equal weights.
pithy
the combination of two or more correlated predictors is barely more predictive than the best of them on its own. Because, in real life, predictors are almost always correlated with one another, this statistical fact supports the use of frugal approaches to prediction, which use a small number of predictors.
The appeal of frugal rules is that they are transparent and easy to apply. Moreover, these advantages come at little cost in accuracy compared with more complex models.
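A minimal sketch of this idea (my own simulation, not from the book; all parameter values are invented for illustration): four predictors share a common factor, the "true" weights are unequal, and yet a plain equal-weight average of the standardized predictors predicts out of sample almost as well as least-squares weights fitted to the data.

```python
# Toy simulation of Dawes's "robust beauty" of equal weights:
# with correlated predictors, equal weights nearly match fitted weights.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, k = 200, 10_000, 4

def make_data(n):
    common = rng.normal(size=(n, 1))            # shared factor -> correlated predictors
    X = 0.7 * common + 0.5 * rng.normal(size=(n, k))
    true_w = np.array([0.5, 0.3, 0.15, 0.05])   # unequal "true" weights (assumed)
    y = X @ true_w + rng.normal(scale=0.8, size=n)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# "Proper" linear model: least-squares weights fitted on the training data.
w_ols, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred_ols = X_te @ w_ols

# Equal-weight model: standardize each predictor (training stats), then average.
z_te = (X_te - X_tr.mean(axis=0)) / X_tr.std(axis=0)
pred_eq = z_te.mean(axis=1)

print(f"OLS   r = {np.corrcoef(pred_ols, y_te)[0, 1]:.3f}")
print(f"Equal r = {np.corrcoef(pred_eq, y_te)[0, 1]:.3f}")
```

In runs of this sketch, the two predictions typically correlate with the outcome to within a few hundredths of each other, which is exactly the point about frugal models.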
let us now travel in the opposite direction on the spectrum of sophistication. What if we could use many more predictors, gather much more data about each of them, spot relationship patterns that no human could detect, and model these patterns to achieve better prediction? This, in essence, is the promise of AI.
broken-leg exceptions.
Regardless of your confidence in the model, if you happen to know that a particular person just broke a leg, you probably know better than the model what their evening will look like.
When using simple models, the broken-leg principle holds an important lesson for decision makers: it tells them when to override the model and when not to.
One of the reasons for the success of machine-learning models in prediction tasks is that they are capable of discovering such broken legs—many more than humans can think of.
Improving predictions of rare events in this way reduces the need for human supervision.
What AI does involves no magic and no understanding; it is mere pattern finding.
the machine-learning model performs much better than human judges do at predicting which defendants are high risks.
The model built by machine learning was also far more successful than linear models that used the same information. The reason is intriguing: “The machine-learning algorithm finds significant signal in combinations of variables that might otherwise be missed.”
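To make that quoted intuition concrete, here is a toy sketch (my own construction, assuming scikit-learn is available; the data-generating process is invented): the outcome depends only on the product of two variables, a combination that carries no linear signal, so a linear model scores near zero while a gradient-boosted tree model discovers the interaction on its own.

```python
# Toy illustration: signal that lives only in a combination of variables.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 2))
# "High risk" only when both variables are elevated together: a pure interaction.
y = (X[:, 0] * X[:, 1] > 0.5).astype(float) + rng.normal(scale=0.1, size=n)

X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]

linear = LinearRegression().fit(X_tr, y_tr)
boosted = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print("linear  R^2:", round(linear.score(X_te, y_te), 3))   # near zero
print("boosted R^2:", round(boosted.score(X_te, y_te), 3))  # substantially higher
```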
algorithms can simultaneously increase accuracy and reduce discrimination.
the growing concern about bias in algorithmic decision making.
Simple rules that are merely sensible typically do better than human judgment.
Despite all the spirited talk about algorithms and machine learning, and despite important exceptions in particular fields, their use remains limited. Many experts ignore the clinical-versus-mechanical debate, preferring to trust their judgment. They have faith in their intuitions and doubt that machines could do better. They regard the idea of algorithmic decision making as dehumanizing and as an abdication of their responsibility.
Even today, coaches, managers, and people who work with them often trust their gut and insist that statistical analysis cannot possibly replace good judgment.
In a 1996 article, Meehl and a coauthor listed (and rebutted) no fewer than seventeen types of objections that psychiatrists, physicians, judges, and other professionals had to mechanical judgment. The authors concluded that the resistance of clinicians can be explained by a combination of sociopsychological factors, including their “fear of technological unemployment,” “poor education,” and a “general dislike of computers.” Since then, researchers have identified additional factors that contribute to this resistance.
Our goal in this book is to offer suggestions for the improvement of human judgment, not to argue for the “disp...
As humans, we are keenly aware that we make mistakes, but that is a privilege we are not prepared to share. We expect machines to be perfect. If this expectation is violated, we discard them.
This attitude is deeply rooted and unlikely to change until near-perfect predictive accuracy can be achieved.
“When there is a lot of data, machine-learning algorithms will do better than humans and better than simple models. But even the simplest rules and algorithms have big advantages over human judges: they are free of noise, and they do not attempt to apply complex, usually invalid insights about the predictors.”
“Since we lack data about the outcome we must predict, why don’t we use an equal-weight model? It will do almost as well as a proper model, and will surely do better than case-by-case human judgment.”
“The algorithm makes mistakes, of course. But if human judges make even more mistakes, whom should we trust?”
Some of the executives in our audiences tell us proudly that they trust their gut more than any amount of analysis.
what, exactly, do these people, who are blessed with the combination of authority and great self-confidence, hear from their gut?
intuition
“a judgment for a given course of action that comes to mind with an aura or conviction of rightness or plausibility, but without clearly articulated reasons or justifications—essentially ‘knowing’ but without knowing why.” We propose that this sense of knowing with...
The internal signal is a self-administered reward, one people work hard (or sometimes not so hard) to achieve when they reach closure on a judgment. It is a satisfying emotional experience, a pleasing sense of coherence, in which the evidence considered and the judgme...
What makes the internal signal important—and misleading—is that it is construed not as a feeling but as a belief.
Confidence is no guarantee of accuracy, however, and many confident predictions turn out to be wrong.
This intractable uncertainty includes everything that cannot be known at this time about the outcome that you are trying to predict.
in principle knowable but is not known
Both intractable uncertainty (what cannot possibly be known) and imperfect information (what could be known but isn’t) make perfect prediction impossible.
We take a terminological liberty here, replacing the commonly used uncertainty with ignorance.
people who engage in predictive tasks will underestimate their objective ignorance. Overconfidence is one of the best-documented cognitive biases.
Philip Tetlock is armed with a fierce commitment to truth and a mischievous sense of humor. In 2005, he published a book titled Expert Political Judgment.
“The average expert was roughly as accurate as a dart-throwing chimpanzee.”
pundits should not be blamed for the failures of their distant predictions. They do, however, deserve some criticism for attempting an impossible task and for believing they can succeed in it.
objective ignorance increases as we look further into the future.
Models are consistently better than people, but not much better.
asserting that the future is unpredictable is hardly a conceptual breakthrough. However, the obviousness of this fact is matched only by the regularity with which it is ignored, as the consistent findings about predictive overconfidence demonstrate.
People who believe themselves capable of an impossibly high level of predictive accuracy are not just overconfident. They don’t merely deny the risk of noise and bias in their judgments. Nor do they simply deem themselves superior to other mortals. They also believe in the predictability of events that are in fact unpredictable, implicitly denying the reality of uncertainty.
denial of ignorance.
When they listen to their gut, decision makers hear the internal signal and feel the emotional reward it brings.
leaders say they are especially likely to resort to intuitive decision making in situations that they perceive as highly uncertain. When the facts deny them the sense of understanding and confidence they crave, they turn to their intuition to provide it. The denial of ignorance is all the more tempting when ignorance is vast.
many leaders draw a seemingly paradoxical conclusion. Their gut-based decisions may not be perfect, they argue, but if the more systematic alternatives are also far from perfect, they are not worth adopting.