Superforecasting: The Art and Science of Prediction
Now comes the hardest-to-grasp part of Taleb’s view of the world. He posits that historical probabilities—all the possible ways the future could unfold—are distributed like wealth, not height. That means our world is vastly more volatile than most of us realize and we are at risk of grave miscalculations.
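Concretely, height is roughly normally distributed while wealth is fat-tailed. A minimal sketch of the contrast, with distribution parameters that are illustrative assumptions rather than anything from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Height-like: normal distribution. Extremes are rare and barely move totals.
heights = rng.normal(loc=170, scale=10, size=1_000_000)

# Wealth-like: fat-tailed Pareto. A handful of extreme draws dominate.
wealth = rng.pareto(a=1.16, size=1_000_000)  # a ~1.16 mimics the 80/20 rule

for name, x in [("height", heights), ("wealth", wealth)]:
    top_share = np.sort(x)[-10_000:].sum() / x.sum()  # share held by the top 1%
    print(f"{name}: top 1% holds {top_share:.0%} of the total")
# Typical output: height about 1%, wealth typically more than half.
```

A world whose possible histories are distributed like the second sample is far more volatile than one distributed like the first, which is the miscalculation the passage warns about.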
If you are as mathematically inclined as Taleb, you get used to the idea that the world we live in is but one that emerged, quasi-randomly, from a vast population of once-possible worlds. The past did not have to unfold as it did, the present did not have to be what it is, and the future is wide open.
Immersion in what-if history can give us a visceral feeling for Taleb’s vision of radical indeterminacy.
On the one hand, the hindsight-tainted analyses that dominate commentary after major events are a dead end.
On the other hand, our expectations of the future are derived from our mental models of how the world works, and every event is an opportunity to learn and improve those models.
Tournaments help researchers learn what improves forecasting and help forecasters sharpen their skills with practice and feedback.
Just as we now expect a pill to have been tested in peer-reviewed experiments before we swallow it, we will expect forecasters to establish the accuracy of their forecasting with rigorous testing before we heed their advice.
A Boston doctor named Ernest Amory Codman had an idea similar in spirit to forecaster scorekeeping. He called it the End Result System.
“Codman’s plan disregarded a physician’s clinical reputation or social standing as well as bedside manner or technical skills,” noted the historian Ira Rutkow. “All that counted were the clinical consequences of a doctor’s effort.”
Ernest Codman, Archie Cochrane, and many others overcame entrenched interests. They did it not by storming the ramparts. They did it with reason and a relentless focus on the singular goal of making the sick well.
The same thinking is transforming charitable foundations, which are increasingly making funding contingent on stringent program evaluation.
Both athletes and teams have improved dramatically over the last thirty or forty years. In part, that’s because the stakes are bigger. But it also happened because what they are doing has increasingly become evidence based.
“A key part of the ‘performance revolution’ in sports, then, is the story of how organizations, in a systematic way, set about making employees more effective and productive.”
An evidence-based forecasting movement would not be a startling change springing up out of nothing. It would be another manifestation of a broad and deep shift away from decision making based on experience, intuition, and authority.
Far too many people treat numbers like sacred totems offering divine insight. The truly numerate know that numbers are tools, nothing more, and their quality can range from wretched to superb.
What matters is the big question, but the big question can’t be scored. The little question doesn’t matter, but it can be scored, so the IARPA tournament went with it.
I call this Bayesian question clustering because of its family resemblance to Bayesian updating.
In future research, I want to develop the concept and see how effectively we can answer unscorable “big questions” with clusters of little ones.
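The book names the idea without giving a recipe. One plausible sketch, in which the little questions, their probabilities, and their weights are all hypothetical: score each small question as usual, and read a weighted average of the cluster as an index on the unscorable big question.

```python
# Hypothetical cluster of scorable "little" questions bearing on an
# unscorable "big" one (e.g., "Is country X liberalizing?").
# Each entry: (question, forecast probability, weight in the index).
cluster = [
    ("Will opposition parties be allowed to register this year?", 0.60, 0.4),
    ("Will at least one jailed journalist be released?",          0.35, 0.3),
    ("Will internet censorship rules be loosened?",               0.20, 0.3),
]

total_weight = sum(w for _, _, w in cluster)
index = sum(p * w for _, p, w in cluster) / total_weight
print(f"Liberalization index: {index:.2f}")  # weighted mean, about 0.41
```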
One way to identify a good question is what I call the smack-the-forehead test: when you read the question after time has passed, you smack your forehead and say, “If only I had thought of that before!”
While we may assume that a superforecaster would also be a superquestioner, and vice versa, we don’t actually know that. Indeed, my best scientific guess is that they often are not.
Superb question generation often seems to accompany a hedgehog-like incisiveness and confidence that one has a Big Idea grasp of the deep drivers of an event.
Superforecasters and superquestioners need to acknowledge each other’s complementary strengths, not dwell on each other’s alleged weaknesses.
A major point of view rarely has zero merit, and if a forecasting contest produces a split decision we will have learned that the reality is more mixed than either side thought. If learning, not gloating, is the goal, that is progress.
The guidelines sketched here distill key themes in this book and in training systems that have been experimentally demonstrated to boost accuracy in real-world forecasting contests.
(1) Triage. Focus on questions where your hard work is likely to pay off.
(2) Break seemingly intractable problems into tractable sub-problems.
Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.
The surprise is how often remarkably good probability estimates arise from a remarkably crude series of assumptions and guesstimates.
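The canonical example of this kind of decomposition is Fermi estimation. A sketch of the classic piano-tuner problem, in which every input is a deliberately crude, out-in-the-open guess:

```python
# Fermi-style decomposition: how many piano tuners work in Chicago?
# Every number is a crude guess; exposed guesses can be criticized and
# corrected, hidden ones cannot.
population        = 2_500_000  # people in Chicago, roughly
people_per_home   = 2.5
share_with_piano  = 1 / 20     # households that own a piano
tunings_per_year  = 1          # each piano tuned about once a year
tunings_per_tuner = 1_000      # ~5 per working day over ~200 days

pianos = population / people_per_home * share_with_piano  # ~50,000
tuners = pianos * tunings_per_year / tunings_per_tuner
print(f"Estimated piano tuners: {tuners:.0f}")            # ~50
```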
(3) Strike the right balance between inside and outside views.
Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort?
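In practice that means anchoring on the outside-view base rate and then adjusting toward the specifics of the case. A sketch; the base rate, the inside estimate, and the 30% weight on the inside view are all assumptions for illustration:

```python
def blend(outside_base_rate: float, inside_estimate: float,
          w_inside: float = 0.3) -> float:
    """Anchor on the outside view, then adjust toward the inside view."""
    return (1 - w_inside) * outside_base_rate + w_inside * inside_estimate

# "How often do things of this sort happen in situations of this sort?"
# Hypothetical: ~5% of comparable startups reach an IPO, but this one
# looks unusually strong from the inside.
print(blend(outside_base_rate=0.05, inside_estimate=0.40))  # 0.155
```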
Summers’s strategy: he doubled the employee’s estimate, then moved to the next higher time unit. “So, if the research assistant says the task will take an hour, it will take two days. If he says two days, it will take four weeks.”
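The quoted rule is mechanical enough to encode directly; the unit ladder below simply restates it:

```python
# Summers's correction: double the estimate, then move to the next
# higher unit of time.
NEXT_UNIT = {"hours": "days", "days": "weeks", "weeks": "months",
             "months": "years"}

def summers_estimate(amount: float, unit: str) -> str:
    return f"{amount * 2:g} {NEXT_UNIT[unit]}"

print(summers_estimate(1, "hours"))  # "2 days"
print(summers_estimate(2, "days"))   # "4 weeks"
```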
(4) Strike the right balance between under- and overreacting to evidence.
Skillful updating requires teasing subtle signals from noisy news flows—all the while resisting the lure of wishful thinking.
Yet superforecasters also know how to jump, to move their probability estimates fast in response to diagnostic signals.
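Bayes' rule makes the balance concrete: the size of the jump should track how diagnostic the evidence is. A sketch with hypothetical numbers, written in the odds form of the update:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Return P(hypothesis | evidence) given the likelihood ratio
    P(evidence | hypothesis true) / P(evidence | hypothesis false)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.30
print(bayes_update(prior, 1.2))  # weak, noisy signal: ~0.34, barely move
print(bayes_update(prior, 9.0))  # diagnostic signal: ~0.79, jump
```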
(5) Look for the clashing causal forces at work in each problem. For every good policy argument, there is typically a counterargument that is at least worth acknowledging.
In classical dialectics, thesis meets antithesis, producing synthesis.
Synthesis is an art that requires reconciling irreducibly subjective judgments.
(6) Strive to distinguish as many degrees of doubt as the problem permits but no more.
(7) Strike the right balance between under- and overconfidence, between prudence and decisiveness. Superforecasters understand the risks both of rushing to judgment and of dawdling too long near “maybe.”
(8) Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.
Conduct unflinching postmortems: Where exactly did I go wrong?
Don’t forget to do postmortems on your successes too. Not all successes imply that your reasoning was right. You may have just lucked out by making offsetting errors.
(9) Bring out the best in others and let others bring out the best in you.
(10) Master the error-balancing bicycle. Implementing each commandment requires balancing opposing errors.
(11) Don’t treat commandments as commandments. “It is impossible to lay down binding rules,” Helmuth von Moltke warned, “because two cases will never be exactly the same.”
Brier scoring imposes reputational penalties for overconfidence that are tied to the financial penalties that gamblers would incur from the same errors. If you aren’t willing to bet on the odds implied by your probability estimate, rethink your estimate.
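For a single yes/no question the arithmetic is simple. A minimal implementation of the two-sided formulation the book uses, which runs from 0 (perfect) to 2 (confidently and completely wrong):

```python
def brier(forecast: float, outcome: int) -> float:
    """Two-sided Brier score for a yes/no question: squared error on
    the 'yes' side plus squared error on the 'no' side."""
    return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

print(brier(0.90, 1))  # 0.02: confident and right
print(brier(0.90, 0))  # 1.62: confident and wrong, a heavy penalty
print(brier(0.50, 1))  # 0.50: perpetual hedging at "maybe"
```

A 90% estimate implies you should be comfortable betting at roughly 9-to-1 odds; if that bet makes you flinch, the passage's advice is to revise the estimate.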
Psychologists call this the fundamental attribution error. We are fully aware that situational factors—like insomnia—can influence our own behavior, and we rightly attribute our behavior to those factors, but we routinely don’t make the same allowance for others and instead assume that their behavior reflects who they are.
I doubt it is because superforecasters are smarter or more open-minded. I suspect they did better because they treat forecasting as a cultivatable skill whereas analysts work inside an organization that treats prediction as a sideshow, not part of the analyst’s real job.
It is amazing how many arbitrary assumptions underlie pretty darn good forecasts. Our choice is not whether to engage in crude guesswork; it is whether to do it overtly or covertly.
This aversion to uncertainty underlies the Ellsberg paradox.
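In Ellsberg's classic urn experiment, an urn holds 30 red balls and 60 more that are black or yellow in an unknown mix. Most people prefer a bet on red to a bet on black, yet also prefer "black or yellow" to "red or yellow". The sketch below checks that no single probability assignment for black makes both preferences rational, so ambiguity aversion, not probability, must be driving the choices:

```python
# Ellsberg's urn: 30 red balls, 60 black-or-yellow in an unknown mix.
p_red = 30 / 90

for p_black in (0.10, 1 / 3, 0.50):  # candidate beliefs about black
    p_yellow = 1 - p_red - p_black
    prefers_red = p_red > p_black                                    # bet 1
    prefers_black_or_yellow = p_black + p_yellow > p_red + p_yellow  # bet 2
    print(f"P(black)={p_black:.2f}: both preferences rational? "
          f"{prefers_red and prefers_black_or_yellow}")
# False every time: bet 1 needs P(black) < 1/3, bet 2 needs P(black) > 1/3.
```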