Superforecasting: The Art and Science of Prediction
It’s a rare day when a journalist says, “The market rose today for any one of a hundred different reasons, or a mix of them, so no one knows.” Instead, like a split-brain patient asked why he is pointing at a picture of a shovel when he has no idea why, the journalist conjures a plausible story from whatever is at hand.
The problem is that we move too fast from confusion and uncertainty (“I have no idea why my hand is pointed at a picture of a shovel”) to a clear and confident conclusion (“Oh, that’s simple”) without spending any time in between (“This is one possible explanation but there are others”).
The interplay between System 1 and System 2 can be subtle and creative. But scientists are trained to be cautious. They know that no matter how tempting it is to anoint a pet hypothesis as The Truth, alternative explanations must get a hearing.
Scientists must be able to answer the question “What would convince me I am wrong?” If they can’t, it’s a sign they have grown too attached to their beliefs.
Our natural inclination is to grab on to the first plausible explanation and happily gather supportive evidence without checking its reliability. That is what psychologists call confirmation bias.
This is a poor way to build an accurate mental model of a complicated world, but it’s a superb way to satisfy the brain’s desire for order because it yields tidy explanations with no loose ends.
Formally, it’s called attribute substitution, but I call it bait and switch: when faced with a hard question, we often surreptitiously replace it with an easy one.
Blink-think is another false dichotomy. The choice isn’t either/or; it is how to blend them in evolving situations.
Drawing such seemingly different conclusions about snap judgments, Kahneman and Klein could have hunkered down and fired off rival polemics. But, like good scientists, they got together to solve the puzzle.
With training or experience, people can encode patterns deep in their memories in vast number and intricate detail—such as the estimated fifty thousand to one hundred thousand chess positions that top players have in their repertoire.
Whether intuition generates delusion or insight depends on whether you work in a world full of valid cues you can unconsciously register for future use.
Learning the cues is a matter of opportunity and effort. Sometimes learning the cues is easy.
“Without those opportunities to learn, a valid intuition can only be due to a lucky accident or to magic,” Kahneman and Klein conclude, “and we do not believe in magic.”
Carlsen respects his intuition, as well he should, but he also does a lot of “double-checking” because he knows that sometimes intuition can let him down and conscious thought can improve his judgment.
The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means.
It is far from unusual that a forecast that at first looks as clear as a freshly washed window proves too opaque to be conclusively judged right or wrong.
In just a few years the world went from the prospect of nuclear war to a new era in which many people—including the Soviet and American leaders—saw a glimmering chance of eliminating nuclear weapons altogether.
Few experts saw this coming. And yet it wasn’t long before most of those who didn’t see it coming grew convinced they knew exactly why it had happened, and what was coming next.
My inner cynic started to suspect that no matter what had happened the experts would have been just as adept at downplaying their predictive failures and sketching an arc of history that made it appear they saw it coming all along.
Take the problem of timelines. Obviously, a forecast without a time frame is absurd. And yet, forecasters routinely make them, as they did in that letter to Ben Bernanke.
Similarly, forecasts often rely on implicit understandings of key terms rather than explicit definitions.
By the time Kent retired from the CIA in 1967, he had profoundly shaped how the American intelligence community does what it calls intelligence analysis.
The key word in Kent’s work is estimate. As Kent wrote, “estimating is what you do when you do not know.”
Analysts should narrow the range of their estimates whenever they can. And to avoid confusion, the terms they use should have designated numerical meanings, which Kent set out in a chart.
But it was never adopted. People liked clarity and precision in principle, but when it came time to make clear and precise forecasts they weren’t so keen on numbers.
When forecasters are forced to translate terms like “serious possibility” into numbers, they have to think carefully about how they are thinking, a process known as metacognition.
But a more fundamental obstacle to adopting numbers relates to accountability and what I call the wrong-side-of-maybe fallacy.
If the forecast said there was a 70% chance of rain and it rains, people think the forecast was right; if it doesn’t rain, they think it was wrong. This simple mistake is extremely common.
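A minimal sketch of why this standard is perverse (the trial count and use of Python’s random module are illustrative assumptions, not anything from the book): even a perfectly calibrated forecaster, whose 70% calls are followed by rain exactly 70% of the time, gets condemned on roughly 30% of them when judged by which side of maybe the outcome lands on.

```python
import random

random.seed(0)

FORECAST = 0.70   # stated chance of rain
TRIALS = 10_000   # many forecast days (illustrative)

judged_wrong = 0
for _ in range(TRIALS):
    # The world behaves exactly as forecast: rain 70% of the time.
    rained = random.random() < FORECAST
    # Wrong-side-of-maybe fallacy: the forecast is called "wrong"
    # whenever the outcome falls on the other side of 50%.
    if not rained:
        judged_wrong += 1

print(f"calibrated 70% forecasts judged 'wrong': {judged_wrong / TRIALS:.1%}")
# ~30% -- even though no forecaster could have done better
```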
If the event happens, “a fair chance” can retroactively be stretched to mean something considerably bigger than 50%—so the forecaster nailed it.
With perverse incentives like these, it’s no wonder people prefer rubbery words over firm numbers.
If we are serious about measuring and improving, this won’t do. Forecasts must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts.
The many forecasts required for calibration calculations make it impractical to judge forecasts about rare events, and even with common events it means we must be patient data collectors—and cautious data interpreters.
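To illustrate why volume matters, here is a minimal calibration check in Python; the toy track record is invented for the example, not taken from the book. It buckets forecasts by stated probability and compares each stated probability to the frequency with which the events actually happened.

```python
from collections import defaultdict

# Hypothetical (made-up) track record: (stated probability, did it happen?)
forecasts = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, True),
    (0.2, False), (0.2, False), (0.2, True), (0.2, False), (0.2, False),
]

# Group forecasts by the probability that was stated...
buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

# ...then compare stated probability to observed frequency.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    freq = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}: happened {freq:.0%} of {len(outcomes)} times")

# With only five forecasts per bucket, the observed frequencies are noisy;
# calibration only becomes meaningful with many forecasts per bucket.
```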
Perfection is godlike omniscience. It’s saying “this will happen” and it does, or “this won’t happen” and it doesn’t. The technical term for this is “resolution.”
When we combine calibration and resolution, we get a scoring system that fully captures our sense of what good forecasters should do.
In effect, Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better. Perfection is 0.
A forecast that is wrong to the greatest possible extent—saying there is a 100% chance that something will happen and it doesn’t, every time—scores a disastrous 2.0, as far from The Truth as it is possible to get.
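A sketch of the arithmetic, assuming the original two-category Brier (1950) formulation the passage implies, in which squared error is summed over both outcome categories so scores run from 0 (perfect) to 2.0 (maximally wrong):

```python
def brier_score(forecasts, outcomes):
    """Two-category Brier score: 0 is perfect, 2.0 is maximally wrong.

    For each forecast p of an event with outcome o (1 if it happened,
    0 if not), squared error is summed over both categories:
    (p - o)**2 + ((1 - p) - (1 - o))**2, which simplifies to
    2 * (p - o)**2, then averaged over all forecasts.
    """
    n = len(forecasts)
    return sum(2 * (p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n

# "This will happen" (100%) and it always does: godlike omniscience.
print(brier_score([1.0, 1.0, 1.0], [1, 1, 1]))  # 0.0

# 100% confident every time, wrong every time: the worst possible score.
print(brier_score([1.0, 1.0, 1.0], [0, 0, 0]))  # 2.0

# Hedging at 50% every time lands in between, whatever happens.
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.5
```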
We have to interpret the meaning of the Brier scores, which requires two more things: benchmarks and comparability.
Anonymity also ensured that participants would make their best guesses, uninfluenced by fear of embarrassment. The effects of public competition would have to wait for a future study.
The final results appeared in 2005—twenty-one years, six presidential elections, and three wars after I sat on the National Research Council panel that got me thinking about forecasting.
The average expert was roughly as accurate as a dart-throwing chimpanzee.
So why did one group do better than the other? It wasn’t whether they had PhDs or access to classified information. Nor was it what they thought—whether they were liberals or conservatives, optimists or pessimists. The critical factor was how they thought.
One group tended to organize their thinking around Big Ideas, although they didn’t agree on which Big Ideas were true or false.
As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions.
The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could.
They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds.
From 2,500-year-old Greek poetry attributed to the warrior-poet Archilochus: “The fox knows many things but the hedgehog knows one big thing.”
I dubbed the Big Idea experts “hedgehogs” and the more eclectic experts “foxes.” Foxes beat hedgehogs.
Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t.
Kudlow’s one Big Idea is supply-side economics. When President George W. Bush followed the supply-side prescription by enacting substantial tax cuts, Kudlow was certain an economic boom of equal magnitude would follow.
The hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next.