Superforecasting: The Art and Science of Prediction
6%
Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person. It may not even be all that hard to get started.
6%
The difference between heavyweights and amateurs, she said, is that the heavyweights know the difference between a 60⁄40 bet and a 40⁄60 bet.
6%
superforecasting demands thinking that is open-minded, careful, curious, and—above all—self-critical. It also demands focus. The kind of thinking that produces superior judgment does not come effortlessly. Only the determined can deliver it reasonably consistently, which is why our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance.
7%
We have all been too quick to make up our minds and too slow to change them. And if we don’t examine how we make these mistakes, we will keep making them.
11%
Scientists must be able to answer the question “What would convince me I am wrong?” If they can’t, it’s a sign they have grown too attached to their beliefs.
13%
“Without those opportunities to learn, a valid intuition can only be due to a lucky accident or to magic,”
13%
The tip-of-your-nose perspective can work wonders but it can also go terribly awry, so if you have the time to think before making a big decision, do so—and be prepared to accept that what seems obviously true now may turn out to be false later.
13%
The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means. We have to know. There can’t be any ambiguity about whether a forecast is accurate or not
16%
expressing a probability estimate with a number may imply to the reader that it is an objective fact, not the subjective judgment it is. That is a danger. But the answer is not to do away with numbers. It’s to inform readers that numbers, just like words, only express estimates—opinions—and nothing more.
20%
How well aggregation works depends on what you are aggregating. Aggregating the judgments of many people who know nothing produces a lot of nothing. Aggregating the judgments of people who know a little is better, and if there are enough of them, it can produce impressive results, but aggregating the judgments of an equal number of people who know lots about lots of different things is most effective because the collective pool of information becomes much bigger. Aggregations of aggregations can also yield impressive results.
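A minimal simulation makes the point concrete. It is only a sketch under invented assumptions: each forecaster's guess is the truth plus independent noise, and the true probability, noise level, and crowd size are all made up for illustration.

```python
import random

random.seed(1)
TRUTH = 0.60  # the true (unknown) probability of the event, invented here

def one_forecaster(noise=0.15):
    """A forecaster who knows a little: the truth plus independent error."""
    return min(1.0, max(0.0, random.gauss(TRUTH, noise)))

guesses = [one_forecaster() for _ in range(1000)]
crowd_average = sum(guesses) / len(guesses)

typical_individual_error = sum(abs(g - TRUTH) for g in guesses) / len(guesses)
print(f"typical individual error: {typical_individual_error:.3f}")  # roughly 0.12
print(f"crowd average error:      {abs(crowd_average - TRUTH):.3f}")  # far smaller
```

The cancellation only works when individual errors are reasonably independent, which is why pooling people who know lots about different things beats pooling many copies of the same view.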
21%
look at how foxes approach forecasting. They deploy not one analytical idea but many and seek out information not from one source but many. Then they synthesize it all into a single conclusion. In a word, they aggregate. They may be individuals working alone, but what they do is, in principle, no different from what Galton’s crowd did. They integrate perspectives and the information contained within them.
23%
replace the tough question with the easy one, answer it, and then sincerely believe that we have answered the tough question. This particular bait and switch—replacing “Was it a good decision?” with “Did it have a good outcome?”—is both popular and pernicious.
27%
Some statistical concepts are both easy to understand and easy to forget. Regression to the mean is one of them. Let’s say the average height of men is five feet eight inches. Now imagine a man who is six feet tall and then picture his adult son. Your initial System 1 hunch may be that the son is also six feet. That’s possible, but unlikely. To see why, we have to engage in some strenuous System 2 reasoning. Imagine that we knew everyone’s height and computed the correlation between the heights of fathers and sons. We would find a strong but imperfect relationship, a correlation of about 0.5, as captured by the line running through the data points in the book’s chart. It tells us that when the father is six feet, we should make a compromise prediction based on both the father’s height and the population average. Our best guess for the son is five feet ten. The son’s height has regressed toward the mean by two inches, halfway between the population average and the father’s height.
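The compromise prediction in the passage is a one-line calculation. This sketch assumes heights in inches and, as the passage implies, that fathers’ and sons’ heights are equally spread out, so the regression slope is just the correlation.

```python
def predict_son(father_height, population_mean=68.0, correlation=0.5):
    """Compromise prediction: start at the population mean and move toward
    the father's height in proportion to the father-son correlation."""
    return population_mean + correlation * (father_height - population_mean)

# Father is six feet (72 in); the average man is five feet eight (68 in).
print(predict_son(72.0))  # 70.0 -> five feet ten, as in the passage
```

With a correlation of 1.0 the best guess would be the father’s full 72 inches; with 0.0 it would be the population average; 0.5 splits the difference.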
28%
regression to the mean is an indispensable tool for testing the role of luck in performance: Mauboussin notes that slow regression is more often seen in activities dominated by skill, while faster regression is more associated with chance.
28%
To illustrate, imagine two people in the IARPA tournament, Frank and Nancy. In year 1, Frank does horribly but Nancy is outstanding. On the bell curve in the book, Frank is ranked down at the 1st percentile and Nancy up at the 99th. If their results were caused entirely by luck—like coin flipping—then in year 2 we would expect both Frank and Nancy to regress all the way back to 50%. If their results were equal parts luck and skill, we would expect halfway regression: Frank should rise to around 25% (between 1% and 50%) and Nancy fall to around 75% (between 50% and 99%). And if their results were entirely decided by skill, there would be no regression at all: Frank would be just as awful in year 2 and Nancy would be just as spectacular.
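The same compromise logic applies to percentile ranks, with the share of performance due to skill playing the role of the correlation. A small sketch using the passage’s numbers:

```python
def expected_year2_rank(year1_rank, skill_share, average_rank=50.0):
    """Regress a year-1 percentile toward the average by however much of
    the performance was luck rather than skill."""
    return average_rank + skill_share * (year1_rank - average_rank)

for skill_share in (0.0, 0.5, 1.0):
    frank = expected_year2_rank(1, skill_share)
    nancy = expected_year2_rank(99, skill_share)
    print(f"skill {skill_share:.0%}: Frank -> {frank}, Nancy -> {nancy}")

# skill 0%   (pure luck):  both expected back at the 50th percentile
# skill 50%  (half skill): Frank ~25th, Nancy ~75th, as in the passage
# skill 100% (pure skill): no regression at all
```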
28%
we should not treat the superstars of any given year as infallible, not even Doug Lorch. Luck plays a role and it is only to be expected that the superstars will occasionally have a bad year and produce ordinary results—just as superstar athletes occasionally look less than stellar. But more basically, and more hopefully, we can conclude that the superforecasters were not just lucky. Mostly, their results reflected skill.
29%
Random selection makes a sample representative, on average, of the population from which it’s drawn.
32%
Statisticians call that the base rate—how common something is within a broader class. Daniel Kahneman has a much more evocative visual term for it. He calls it the “outside view”—in contrast to the “inside view,”
32%
It’s natural to be drawn to the inside view. It’s usually concrete and filled with engaging detail we can use to craft a story about what’s going on. The outside view is typically abstract, bare, and doesn’t lend itself so readily to storytelling.
33%
You may wonder why the outside view should come first. After all, you could dive into the inside view and draw conclusions, then turn to the outside view. Wouldn’t that work as well? Unfortunately, no, it probably wouldn’t. The reason is a basic psychological concept called anchoring. When we make estimates, we tend to start with some number and adjust. The number we start with is called the anchor. It’s important because we typically underadjust, which means a bad anchor can easily produce a bad estimate. And it’s astonishingly easy to settle on a bad anchor.
33%
a forecaster who starts by diving into the inside view risks being swayed by a number that may have little or no meaning. But if she starts with the outside view, her analysis will begin with an anchor that is meaningful. And a better anchor is a distinct advantage.
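A toy illustration of the underadjustment mechanism described above. The adjustment factor and every number here are invented; the point is only that partial adjustment leaves the final estimate tethered to whatever anchor you started from.

```python
def adjusted_estimate(anchor, evidence_suggests, adjustment=0.6):
    """Start at an anchor and move only part of the way toward what the
    case-specific evidence suggests (people typically underadjust)."""
    return anchor + adjustment * (evidence_suggests - anchor)

evidence_suggests = 0.40   # what a careful inside-view reading points to
base_rate         = 0.30   # outside view: how often similar cases resolve "yes"
arbitrary_number  = 0.90   # a vivid but meaningless figure encountered first

print(adjusted_estimate(base_rate, evidence_suggests))         # 0.36
print(adjusted_estimate(arbitrary_number, evidence_suggests))  # 0.60
```

Starting from the base rate, the final estimate lands in a defensible range; starting from the meaningless number, the same evidence and the same reasoning leave it badly inflated.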
33%
A good exploration of the inside view does not involve wandering around, soaking up any and all information and hoping that insight somehow emerges. It is targeted and purposeful: it is an investigation, not an amble.
34%
Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person.
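A minimal sketch of that "crowd within" effect, with invented numbers. The only requirement is that the second guess, made after assuming the first was wrong, errs in a different direction.

```python
truth = 0.55          # how things actually turn out (unknown at forecast time)

first_guess  = 0.75   # initial judgment
second_guess = 0.45   # after assuming the first guess was wrong and rethinking

combined = (first_guess + second_guess) / 2   # 0.60

print(f"{abs(first_guess - truth):.2f}")  # 0.20  error of the first answer alone
print(f"{abs(combined - truth):.2f}")     # 0.05  error after averaging with yourself
```

If both guesses had erred on the same side, averaging would not have helped, which is why the deliberate "assume you were wrong" step matters.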
34%
Outside views, inside view, other outside and inside views, second opinions from yourself … that’s a lot of perspectives—and inevitably a lot of dissonant information.
34%
That is “dragonfly eye” in operation. And yes, it is mentally demanding. Superforecasters pursue point-counterpoint discussions routinely, and they keep at them long past the point where most people would succumb to migraines.
34%
A brilliant puzzle solver may have the raw material for forecasting, but if he doesn’t also have an appetite for questioning basic, emotionally charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking. It’s not the raw crunching power you have that matters most. It’s what you do with it.
34%
Baron’s test for AOM (active open-mindedness) asks whether you agree or disagree with statements like: People should take into consideration evidence that goes against their beliefs. It is more useful to pay attention to those who disagree with you than to pay attention to those who agree. Changing your mind is a sign of weakness.
35%
Intuition is the best guide in making decisions. It is important to persevere in your beliefs even when evidence is brought to bear against them.
35%
For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.
37%
ignorance prior, the state of knowledge you are in before you know whether the coin will land heads or tails
37%
A parent willing to pay something to reduce her child’s risk of contracting a serious disease from 10% to 5% may be willing to pay two to three times as much to reduce the risk from 5% to 0%. Why is a decline from 5% to 0% so much more valuable than a decline from 10% to 5%? Because it delivers more than a 5% reduction in risk. It delivers certainty. Both 0% and 100% weigh far more heavily in our minds than the mathematical models of economists say they should.
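On a plain expected-value calculation the two reductions are worth exactly the same, which is what makes the extra willingness to pay for certainty a psychological premium rather than an arithmetic one. The cost figure below is invented purely for illustration.

```python
COST_IF_SICK = 100_000  # stand-in cost of the disease, invented for illustration

def expected_loss(risk_percent):
    """Expected cost given a risk expressed in whole percentage points."""
    return COST_IF_SICK * risk_percent // 100

value_of_10_to_5 = expected_loss(10) - expected_loss(5)
value_of_5_to_0  = expected_loss(5) - expected_loss(0)

print(value_of_10_to_5)  # 5000
print(value_of_5_to_0)   # 5000 -- identical on paper, yet certainty feels worth far more
```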
38%
people equate confidence and competence, which makes the forecaster who says something has a middling probability of happening less worthy of respect.
39%
Epistemic uncertainty is something you don’t know but is, at least in theory, knowable.
39%
Aleatory uncertainty is something you not only don’t know; it is unknowable.
39%
Aleatory uncertainty ensures life will always have surprises, regardless of how carefully we plan. Superforecasters grasp this deep truth better than most.
41%
To the extent that we allow our thoughts to move in the direction of fate, we undermine our ability to think probabilistically.
42%
Unpack the question into components. Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized. Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena. Then adopt the inside view that plays up the uniqueness of the problem. Also explore the similarities and differences between your views and those of others—and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different …
44%
“belief perseverance.” People can be astonishingly intransigent—and capable of rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.
44%
The fact that what I expected to happen didn’t happen proves that it will.
44%
the brain likes things neat and orderly and once it has things that way it tries to keep disturbances to a minimum.
45%
People base their estimate on what they think is a useful tidbit of information. Then they encounter clearly irrelevant information—meaningless noise—which they indisputably should ignore. But they don’t.
47%
In his famous essay “Politics and the English Language,” George Orwell concluded with six emphatic rules, including “never use a long word where a short one will do” and “never use the passive where you can use the active.” But the sixth rule was the key: “Break any of these rules sooner than say anything outright barbarous.”
49%
To learn from failure, we must know when we fail.
51%
Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress.
52%
we can now sketch a rough composite portrait of the modal superforecaster.

In philosophic outlook, they tend to be:
CAUTIOUS: Nothing is certain
HUMBLE: Reality is infinitely complex
NONDETERMINISTIC: What happens is not meant to be and does not have to happen

In their abilities and thinking styles, they tend to be:
ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected
INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges
REFLECTIVE: Introspective and self-critical
NUMERATE: Comfortable with numbers
52%
PRAGMATIC: Not wedded to any idea or agenda
ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views
DRAGONFLY-EYED: Value diverse views and synthesize them into their own
PROBABILISTIC: Judge using many grades of maybe
THOUGHTFUL UPDATERS: When facts change, they change their minds
GOOD INTUITIVE PSYCHOLOGISTS: Aware of the value of checking thinking for cognitive and emotional biases

In their work ethic, they tend to have:
A GROWTH MINDSET: Believe it’s possible to get better
GRIT: Determined to keep at it however long it takes
54%
Practice “constructive confrontation,” to use the phrase of Andy Grove, the former CEO of Intel. Precision questioning is one way to do that. Drawing on the work of Dennis Matthies and Monica Worline, we showed them how to tactfully dissect the vague claims people often make. Suppose someone says, “Unfortunately, the popularity of soccer, the world’s favorite pastime, is starting to decline.” You suspect he is wrong. How do you question the claim? Don’t even think of taking a personal shot like “You’re silly.” That only adds heat, not light. “I don’t think so” only expresses disagreement …