Superforecasting: The Art and Science of Prediction
Read January 30 – February 15, 2024
3%
It was easiest to beat chance on the shortest-range questions that only required looking one year out, and accuracy fell off the further out experts tried to forecast—approaching the dart-throwing-chimpanzee level three to five years out. That was an important finding.
3%
“chaos theory”: in nonlinear systems like the atmosphere, even small changes in initial conditions can mushroom to enormous proportions.
5%
Unpredictability and predictability coexist uneasily in the intricately interlocking systems that make up our bodies, our societies, and the cosmos. How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances.
5%
The consumers of forecasting—governments, business, and the public—don’t demand evidence of accuracy. So there is no measurement. Which means no revision. And without revision, there can be no improvement.
5%
“I have been struck by how important measurement is to improving the human condition,” Bill Gates wrote. “You can achieve incredible progress if you set a clear goal and find a measure that will drive progress toward that goal….This may seem basic, but it is amazing how often it is not done and how hard it is to get right.”
11%
System 1 follows a primitive psycho-logic: if it feels true, it is.
12%
The human brain demands order. The world must make sense, which means we must be able to explain what we see and think. And we usually can—because we are creative confabulators hardwired to invent stories that impose coherence on the world.
13%
Formally, it’s called attribute substitution, but I call it bait and switch: when faced with a hard question, we often surreptitiously replace it with an easy one.
13%
tip-of-your-nose perspective.
14%
The tip-of-your-nose perspective can work wonders but it can also go terribly awry, so if you have the time to think before making a big decision, do so—and be prepared to accept that what seems obviously true now may turn out to be false later.
15%
The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means. We have to know.
19%
Forecasts must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts.
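The demand for numbers is what makes scoring possible. The book's forecasting tournaments graded forecasts with the Brier score; the sketch below uses the common one-term binary form (the tournament's two-category variant doubles these values), and the sample probabilities are illustrative:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against what happened.

    forecasts: probabilities assigned to the event occurring (0.0-1.0).
    outcomes:  1 if the event happened, 0 if it didn't.
    0.0 is a perfect score; a constant 50% hedge earns 0.25.
    """
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# A confident, correct forecaster beats a hedger. "Lots of forecasts"
# matter because one lucky call proves nothing; the score is an average.
print(round(brier_score([0.8, 0.9, 0.3], [1, 1, 0]), 3))  # 0.047
```

Averaging over many questions is what separates skill from luck in the score.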
22%
hedgehog forecasters first see things from the tip-of-your-nose perspective. That’s natural enough. But the hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next.
31%
Mauboussin notes that slow regression is more often seen in activities dominated by skill, while faster regression is more associated with chance.15
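Mauboussin's diagnostic can be sketched numerically. The simulation below is my own construction, not the book's: each performer's yearly result is a fixed skill level plus fresh random luck, and the correlation between two years of results is high when skill dominates (slow regression to the mean) and near zero when luck dominates (fast regression):

```python
import random

def year_to_year_correlation(skill_sd, luck_sd, n=20000, seed=0):
    """Correlation between two 'years' of results for n performers whose
    observed result each year is fixed skill plus fresh random luck."""
    rng = random.Random(seed)
    skill = [rng.gauss(0, skill_sd) for _ in range(n)]
    year1 = [s + rng.gauss(0, luck_sd) for s in skill]
    year2 = [s + rng.gauss(0, luck_sd) for s in skill]
    m1, m2 = sum(year1) / n, sum(year2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(year1, year2)) / n
    var1 = sum((a - m1) ** 2 for a in year1) / n
    var2 = sum((b - m2) ** 2 for b in year2) / n
    return cov / (var1 * var2) ** 0.5

print(round(year_to_year_correlation(skill_sd=1.0, luck_sd=0.2), 2))  # ≈ 0.96: skill-dominated
print(round(year_to_year_correlation(skill_sd=0.2, luck_sd=1.0), 2))  # ≈ 0.04: luck-dominated
```

In expectation the correlation equals the skill share of total variance, which is why regression speed reveals the skill/luck mix.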
36%
a forecaster who starts by diving into the inside view risks being swayed by a number that may have little or no meaning. But if she starts with the outside view, her analysis will begin with an anchor that is meaningful. And a better anchor is a distinct advantage.
37%
Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person.13 The same effect was produced simply by letting several weeks pass before asking people to make a second estimate.
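This "crowd within" effect is easy to sketch: averaging two of your own independent estimates lets errors that point in opposite directions partially cancel. The numbers below are illustrative, not from the cited study:

```python
def combine_estimates(first, second):
    """The 'crowd within': average your own two independent guesses."""
    return (first + second) / 2

# Hypothetical question whose true answer is 100.
truth = 100
first, second = 130, 85          # two guesses, made at different times
combined = combine_estimates(first, second)
print(abs(first - truth))        # 30: error of the first guess alone
print(abs(combined - truth))     # 7.5: error after averaging
```

The gain depends on the second guess being genuinely independent of the first, which is why the delay of several weeks helps.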
43%
Epistemic uncertainty is something you don’t know but is, at least in theory, knowable. If you wanted to predict the workings of a mystery machine, skilled engineers could, in theory, pry it open and figure it out.
43%
Aleatory uncertainty is something you not only don’t know; it is unknowable. No matter how much you want to know whether it will rain in Philadelphia one year from now, no matter how many great meteorologists you consult, you can’t outguess the seasonal averages.
45%
A probabilistic thinker will be less distracted by “why” questions and focus on “how.”
46%
Unpack the question into components. Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized. Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena. Then adopt the inside view that plays up the uniqueness of the problem.
46%
Also explore the similarities and differences between your views and those of others—and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different views into a single vision as acute as that of a dragonfly.
46%
Finally, express your judgment as precisely as you can, using a finely graine...
This highlight has been truncated due to consecutive passage length restrictions.
47%
These stories suggest that if you spot potentially insightful new evidence, you should not hesitate to turn the ship’s wheel hard.
47%
So there are two dangers a forecaster faces after making the initial call. One is not giving enough weight to new information. That’s underreaction. The other danger is overreacting to new information, seeing it as more meaningful than it is, and adjusting a forecast too radically.
47%
“I think that the question I was really answering wasn’t ‘Will Abe visit Yasukuni?’ but ‘If I were PM of Japan, would I visit Yasukuni?’ ”3 That’s astute. And it should sound familiar: Bill recognized that he had unconsciously pulled a bait and switch on himself, substituting an easy question in place of a hard one. Having strayed from the real question, Bill dismissed the new information because it was irrelevant to his replacement question.
48%
“belief perseverance.” People can be astonishingly intransigent—and capable of rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.
48%
The Yale professor Dan Kahan has done much research showing that our judgments about risks—Does gun control make us safer or put us in danger?—are driven less by a careful weighing of evidence than by our identities, which is why people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other.
49%
Psychologists call this the dilution effect, and given that stereotypes are themselves a source of bias we might say that diluting them is all to the good. Yes and no.
50%
superforecasters not only update more often than other forecasters, they update in smaller increments.
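Small-increment updating has a natural formal analogue in Bayes' rule, which the book discusses in this context. In odds form, weakly diagnostic news (a likelihood ratio near 1) moves the probability only slightly. A sketch with made-up likelihood ratios:

```python
def update(prob, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR.
    A likelihood ratio near 1 means weakly diagnostic evidence, so the
    probability moves only a little -- a small increment."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

# Hypothetical stream of mildly diagnostic news items.
p = 0.40
for lr in [1.2, 1.1, 0.9, 1.3]:
    p = update(p, lr)
    print(round(p, 3))  # each step nudges the forecast by a few points
```

Overreaction corresponds to treating every news item as if its likelihood ratio were far from 1; underreaction, to ignoring it entirely.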
55%
Forecasters who use ambiguous language and rely on flawed memories to retrieve old forecasts don’t get clear feedback, which makes it impossible to learn from experience.
55%
The lesson he drew: “Be careful about making assumptions of expertise, ask experts if you can find them, reexamine your assumptions from time to time.”
56%
But when the scenarios implied that their correct forecast could easily have turned out wrong, they dismissed it as speculative. So experts were open to “I was almost right” scenarios but rejected “I was almost wrong” alternatives.
57%
Computer programmers have a wonderful term for a program that is not intended to be released in a final version but will instead be used, analyzed, and improved without end. It is “perpetual beta.”
58%
In Janis’s hypothesis, “members of any small cohesive group tend to maintain esprit de corps by unconsciously developing a number of shared illusions and related norms that interfere with critical thinking and reality testing.”3 Groups that get along too well don’t question assumptions or confront uncomfortable facts.
60%
bring in outsiders, suspend hierarchy, and keep the leader’s views under wraps. There’s also the “premortem,” in which the team is told to assume a course of action has failed and to explain why—which makes team members feel safe to express doubts they may have about the leader’s plan.
62%
“givers,” “matchers,” and “takers.” Givers are those who contribute more to others than they receive in return; matchers give as much as they get; takers give less than they take.
62%
Grant’s research shows that the pro-social example of the giver can improve the behavior of others, which helps everyone, including the giver—which explains why Grant has found that givers tend to come out on top.
63%
Confidence will be on everyone’s list. Leaders must be reasonably confident, and instill confidence in those they lead, because nothing can be accomplished without the belief that it can be.
63%
Decisiveness is another essential attribute. Leaders can’t ruminate endlessly.
63%
And leaders must deliver a vision—the goal that everyone strives ...
This highlight has been truncated due to consecutive passage length restrictions.
64%
“Once a course of action has been initiated it must not be abandoned without overriding reason,” the Wehrmacht manual stated. “In the changing situations of combat, however, inflexibly clinging to a course of action can lead to failure. The art of leadership consists of the timely recognition of circumstances and of the moment when a new decision is required.”
68%
The humility required for good judgment is not self-doubt
68%
It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.
78%
That one tiny question doesn’t nail down the big question, but it does contribute a little insight. And if we ask many tiny-but-pertinent questions, we can close in on an answer for the big question.
78%
The answers are cumulative.
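This tiny-question approach is Fermi estimation: each small answer becomes one factor, and the factors multiply into an estimate of the big question. A classic worked example (piano tuners in a large city), with every number an illustrative assumption rather than data from the book:

```python
# Each line answers one tiny question; multiplying them answers the big one.
households = 3_000_000           # households in a large metro area (assumed)
pianos_per_household = 1 / 20    # roughly 1 in 20 owns a piano (assumed)
tunings_per_piano_per_year = 1   # a piano gets tuned about once a year (assumed)
tunings_per_tuner_per_day = 4    # a tuner can do about 4 tunings a day (assumed)
working_days_per_year = 250

tunings_needed = households * pianos_per_household * tunings_per_piano_per_year
tunings_supplied_per_tuner = tunings_per_tuner_per_day * working_days_per_year
estimated_tuners = tunings_needed / tunings_supplied_per_tuner
print(round(estimated_tuners))   # 150
```

Errors in the individual factors tend to partially cancel, which is why the product often lands within a factor of two or three of the truth.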
81%
Bear in mind the two basic errors it is possible to make here. We could fail to try to predict the potentially predictable or we could waste our time trying to predict the unpredictable. Which error would be worse in the situation you face?
81%
Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.
81%
Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort?
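The outside-view question translates directly into a base rate: the frequency of the event in a reference class of comparable past situations, used as an anchor before any inside-view adjustment. All numbers below are hypothetical:

```python
def outside_view_anchor(reference_class_outcomes):
    """Base rate: the fraction of similar past situations in which the
    event occurred. The starting anchor for a forecast."""
    return sum(reference_class_outcomes) / len(reference_class_outcomes)

# Hypothetical reference class: 20 comparable past situations,
# 1 = the event occurred, 0 = it didn't.
past = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
anchor = outside_view_anchor(past)
print(anchor)  # 0.25

# Inside-view adjustment: nudge the anchor for case specifics, don't discard it.
inside_view_tilt = 0.05
forecast = anchor + inside_view_tilt
print(round(forecast, 2))  # 0.3
```

Starting from the anchor and adjusting keeps the forecast tethered to something meaningful, which is the advantage described at 36% above.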
82%
In classical dialectics, thesis meets antithesis, producing synthesis. In dragonfly eye, one view meets another and another and another—all of which must be synthesized into a single image.
82%
Synthesis is an art that requires reconciling irreducibly subjective judgments. If you do it well, engaging in this process of synthesizing should transform you from a cookie-cutter dove or hawk into an odd hybrid creature, a dove-hawk, with a nuanced view of when tougher or softer policies are likelier to work.