Superforecasting: The Art and Science of Prediction
When a block is at the very base of the tower, there’s no way to remove it without bringing everything crashing down. This extreme commitment leads to extreme reluctance to admit error.
This suggests that superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast. That helps them avoid underreaction when new evidence calls for updating beliefs.
Those who got the irrelevant information lost confidence. Why? With nothing to go on but evidence that fits their stereotype of a good student or a child abuser, the signal feels strong and clear—and our judgment reflects that. But add irrelevant information and we can’t help but see Robert or David more as a person than a stereotype, which weakens the fit. Psychologists call this the dilution effect.
Many studies have found that those who trade more frequently get worse returns than those who lean toward old-fashioned buy-and-hold strategies.
Traders who constantly buy and sell are not cognitively or emotionally connected to their stocks.
Given superforecasters’ modest commitment to their forecasts, we would expect overreactions to be a greater risk than underreactions. And yet, superforecasters often manage to avoid both errors.
50%
Flag icon
For his initial forecasts, Tim takes less time than some other top forecasters.
But the next day, he’ll come back, take another look, and form a second opinion. He also trawls for contrary evidence on the Internet. And he does this five days a week. All that exploration makes him change his mind a lot.
I haven’t yet mentioned the magnitude of his constant course corrections. In almost every case they are small. And that makes a big difference.
The tournament data prove it: superforecasters not only update more often than other forecasters, they update in smaller increments.
Imagine you are sitting with your back to a billiards table. A friend rolls a ball onto the table, and it stops at a spot you cannot see; to work out where it lies, your friend rolls a second ball. You ask, “Is the second ball to the left or the right of the first?” Your friend says, “To the left.” That’s an almost trivial scrap of information. But it’s not nothing.
Keep repeating the process and you slowly narrow the range of the possible locations, zeroing in on the truth—although you will never eliminate uncertainty entirely.
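A minimal sketch, in Python and not from the book, of how this narrowing might be simulated: a discretized belief over the first ball's position is multiplied by the likelihood of each "left"/"right" report, and the uncertainty shrinks with every roll without ever reaching zero. The grid size and number of rolls are arbitrary choices for illustration.

```python
# Illustrative simulation of the billiard-table thought experiment (not from the book).
import random

GRID = 1000                                    # candidate positions in (0, 1)
positions = [(i + 0.5) / GRID for i in range(GRID)]
belief = [1.0 / GRID] * GRID                   # uniform prior over the first ball

true_position = random.random()                # hidden location of the first ball

def update(belief, report):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    # If the first ball is at x, a uniformly rolled ball lands to its left
    # with probability x and to its right with probability 1 - x.
    likelihood = [x if report == "left" else 1.0 - x for x in positions]
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

for roll in range(1, 101):
    report = "left" if random.random() < true_position else "right"
    belief = update(belief, report)
    if roll in (1, 10, 100):
        mean = sum(x * b for x, b in zip(positions, belief))
        sd = sum(b * (x - mean) ** 2 for x, b in zip(positions, belief)) ** 0.5
        print(f"after {roll:3d} rolls: estimate {mean:.3f} +/- {sd:.3f} (truth {true_position:.3f})")
```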
51%
Flag icon
If you’ve taken Statistics 101, you may recall a version of this thought experiment was dreamt up by Thomas Bayes.
The theorem says that your new belief should depend on two things: your prior belief (and all the knowledge that informed it) multiplied by the “diagnostic value” of the new information.
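In standard notation (the book keeps the statement verbal), the rule reads as follows, with the “diagnostic value” of the evidence playing the role of the likelihood ratio:

```latex
% Standard statement of Bayes' theorem; this notation is not the book's own.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
In odds form, the ``diagnostic value'' of the evidence is the likelihood ratio:
\[
  \underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
  =
  \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
  \times
  \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
\]
\end{document}
```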
“Bayes’ theorem requires us to estimate two things: 1) how likely are we to see a poor Senate performance when the nominee is destined to fail and 2) how likely are we to see a poor performance when the nominee is bound for approval?”
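To make those two estimates concrete, here is a short Python illustration of how they feed the update; the prior and both likelihoods are hypothetical numbers chosen only to show the arithmetic, not figures from the book.

```python
# Illustrative application of the odds form of Bayes' theorem to the nominee
# question; every probability below is hypothetical.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of the hypothesis after seeing the evidence."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical figures: a 25% prior that the nomination fails; a poor Senate
# performance occurs in 60% of doomed nominations but only 20% of approved ones.
posterior = bayes_update(prior=0.25, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
print(f"P(nomination fails | poor performance) = {posterior:.2f}")  # 0.50
```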
What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.
Superforecasters understand the principles but also know that their application requires nuanced judgments. And they would rather break the rules than make a barbarous forecast.
To be a top-flight forecaster, a growth mindset is essential.
The one consistent belief of the “consistently inconsistent” John Maynard Keynes was that he could do better.
The knowledge required to ride a bicycle can’t be fully captured in words and conveyed to others. We need “tacit knowledge,” the sort we only get from bruising experience.
Learning to forecast requires trying to forecast. Reading books on forecasting is no substitute for the experience of the real thing.
Police officers spend a lot of time figuring out who is telling the truth and who is lying, but research has found they aren’t nearly as good at it as they think they are and they tend not to get better with experience. That’s because experience isn’t enough. It must be accompanied by clear feedback.
As a result, officers grow confident faster than they grow accurate, meaning they grow increasingly overconfident.
Unfortunately, most forecasters do not get the high-quality feedback that helps meteorologists and bridge players improve. There are two main reasons why.
When a forecaster says something could or might or may happen, she could or might or may be saying almost anything.
Even an impartial observer would struggle to extract meaningful feedback from vague forecasts, but often the judge is the forecaster herself. That makes the problem even worse.
The second big barrier to feedback is time lag. When forecasts span months or years, the wait for a result allows the flaws of memory to creep in.
Once we know the outcome of something, that knowledge skews our perception of what we thought before we knew the outcome: that’s hindsight bias.
Forecasters who use ambiguous language and rely on flawed memories to retrieve old forecasts don’t get clear feedback, which makes it impossible to learn from experience.
Research shows that judgment calibrated in one context transfers poorly, if at all, to another.
To get better at a certain type of forecasting, that is the type of forecasting you must do—over and over again.
Often, postmortems are as careful and self-critical as the thinking that goes into making the initial forecast.
People often assume that when a decision is followed by a good outcome, the decision was good, which isn’t always true, and can be dangerous if it blinds us to the flaws in our thinking.
Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress.
Computer programmers have a wonderful term for a program that is not intended to be released in a final version but will instead be used, analyzed, and improved without end. It is “perpetual beta.” Superforecasters are perpetual beta.
If the Bay of Pigs was the Kennedy administration’s nadir, the Cuban missile crisis was its zenith, a moment when Kennedy and his team creatively engineered a positive result under extreme pressure.
The cast of characters in both dramas is mostly the same: the team that bungled the Bay of Pigs was the team that performed brilliantly during the Cuban missile crisis.
Today, everyone has heard of groupthink.
Groups that get along too well don’t question assumptions or confront uncomfortable facts. So everyone agrees, which is pleasant, and the fact that everyone agrees is tacitly taken to be proof the group is on the right track.
After the fiasco, Kennedy ordered an inquiry to figure out how his people could have botched it so badly. It identified cozy unanimity as the key problem and recommended changes to the decision-making process to ensure it could never develop again. Skepticism was the new watchword.
Groups can be wise, or mad, or both. What makes the difference isn’t just who is in the group, as Kennedy’s circle of advisers demonstrated. The group is its own animal.
When people gather and discuss in a group, independence of thought and expression can be lost.
A group can get people to abandon independent judgment and buy into errors. When that happens, the mistakes will pile up, not cancel out.
If forecasters can keep questioning themselves and their teammates, and welcome vigorous debate, the group can become more than the sum of its parts.
At the end of the year, the results were unequivocal: on average, teams were 23% more accurate than individuals.
Success can lead to acclaim that can undermine the habits of mind that produced the success. Such hubris often afflicts highly accomplished individuals. In business circles, it is called CEO disease.
Research on teams often assumes they have leaders and norms and focuses on ensuring these don’t hinder performance.