Kindle Notes & Highlights
Read between January 30 and February 15, 2024
It was easiest to beat chance on the shortest-range questions that only required looking one year out, and accuracy fell off the further out experts tried to forecast—approaching the dart-throwing-chimpanzee level three to five years out. That was an important finding.
“chaos theory”: in nonlinear systems like the atmosphere, even small changes in initial conditions can mushroom to enormous proportions.
Unpredictability and predictability coexist uneasily in the intricately interlocking systems that make up our bodies, our societies, and the cosmos. How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances.
The consumers of forecasting—governments, business, and the public—don’t demand evidence of accuracy. So there is no measurement. Which means no revision. And without revision, there can be no improvement.
“I have been struck by how important measurement is to improving the human condition,” Bill Gates wrote. “You can achieve incredible progress if you set a clear goal and find a measure that will drive progress toward that goal….This may seem basic, but it is amazing how often it is not done and how hard it is to get right.”
System 1 follows a primitive psycho-logic: if it feels true, it is.
The human brain demands order. The world must make sense, which means we must be able to explain what we see and think. And we usually can—because we are creative confabulators hardwired to invent stories that impose coherence on the world.
Formally, it’s called attribute substitution, but I call it bait and switch: when faced with a hard question, we often surreptitiously replace it with an easy one.
tip-of-your-nose perspective.
The tip-of-your-nose perspective can work wonders but it can also go terribly awry, so if you have the time to think before making a big decision, do so—and be prepared to accept that what seems obviously true now may turn out to be false later.
The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means. We have to know.
Forecasts must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts.
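One standard way to judge such numeric forecasts is the Brier score, the accuracy measure used in Tetlock's forecasting tournaments. The sketch below uses invented forecasts and outcomes purely for illustration; lower scores are better.

```python
# Minimal sketch of judging forecasts with numbers: the Brier score.
# Forecasts and outcomes below are invented for illustration, not data from the book.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes
    (1 = the event happened, 0 = it did not). 0.0 is perfect; a constant
    50% forecast scores 0.25; 1.0 is the worst possible on this one-sided version."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.7, 0.2, 0.6]  # "Will X happen by date Y?" answered with numbers
outcomes = [1, 1, 0, 0]           # what actually happened

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # lower is better
```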
hedgehog forecasters first see things from the tip-of-your-nose perspective. That’s natural enough. But the hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next.
Mauboussin notes that slow regression is more often seen in activities dominated by skill, while faster regression is more associated with chance.15
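A rough way to see Mauboussin's point in code: treat each period's result as a mix of skill and luck. When luck dominates, the top performers' edge mostly disappears the next period (fast regression); when skill dominates, it mostly persists. The skill/luck weights below are arbitrary assumptions, not figures from Mauboussin.

```python
# Sketch: outcomes = skill + luck. The more luck matters, the faster
# top performers regress toward the mean in the next period.
import random

def regression_gap(luck_weight, n=100_000, seed=0):
    rng = random.Random(seed)
    skill = [rng.gauss(0, 1) for _ in range(n)]
    period1 = [s * (1 - luck_weight) + rng.gauss(0, 1) * luck_weight for s in skill]
    period2 = [s * (1 - luck_weight) + rng.gauss(0, 1) * luck_weight for s in skill]
    # Take the top 10% in period 1 and see how much of their edge survives in period 2.
    top = sorted(range(n), key=lambda i: period1[i], reverse=True)[: n // 10]
    edge1 = sum(period1[i] for i in top) / len(top)
    edge2 = sum(period2[i] for i in top) / len(top)
    return edge2 / edge1  # fraction of the period-1 edge that persists

print(f"mostly skill  (luck=0.2): {regression_gap(0.2):.2f} of the edge persists")
print(f"mostly chance (luck=0.9): {regression_gap(0.9):.2f} of the edge persists")
```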
a forecaster who starts by diving into the inside view risks being swayed by a number that may have little or no meaning. But if she starts with the outside view, her analysis will begin with an anchor that is meaningful. And a better anchor is a distinct advantage.
Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person.13 The same effect was produced simply by letting several weeks pass before asking people to make a second estimate.
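The arithmetic behind combining two estimates is simple: errors that point in different directions partly cancel when averaged. The quantity and guesses below are invented for illustration.

```python
# Toy illustration of why averaging a first and second guess tends to help:
# partly independent errors cancel when combined. All numbers are invented.
true_value = 120

first_guess = 150    # initial estimate, too high
second_guess = 100   # estimate made after assuming the first was wrong, too low

combined = (first_guess + second_guess) / 2

for name, guess in [("first", first_guess), ("second", second_guess), ("combined", combined)]:
    print(f"{name:>8}: error = {abs(guess - true_value)}")
# The combined estimate's error (5) is smaller than either individual error (30 and 20).
```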
Epistemic uncertainty is something you don’t know but is, at least in theory, knowable. If you wanted to predict the workings of a mystery machine, skilled engineers could, in theory, pry it open and figure it out.
Aleatory uncertainty is something you not only don’t know; it is unknowable. No matter how much you want to know whether it will rain in Philadelphia one year from now, no matter how many great meteorologists you consult, you can’t outguess the seasonal averages.
A probabilistic thinker will be less distracted by “why” questions and focus on “how.”
Unpack the question into components. Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized. Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena. Then adopt the inside view that plays up the uniqueness of the problem.
Also explore the similarities and differences between your views and those of others—and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different views into a single vision as acute as that of a dragonfly.
Finally, express your judgment as precisely as you can, using a finely graine...
These stories suggest that if you spot potentially insightful new evidence, you should not hesitate to turn the ship’s wheel hard.
So there are two dangers a forecaster faces after making the initial call. One is not giving enough weight to new information. That’s underreaction. The other danger is overreacting to new information, seeing it as more meaningful than it is, and adjusting a forecast too radically.
“I think that the question I was really answering wasn’t ‘Will Abe visit Yasukuni?’ but ‘If I were PM of Japan, would I visit Yasukuni?’ ”3 That’s astute. And it should sound familiar: Bill recognized that he had unconsciously pulled a bait and switch on himself, substituting an easy question in place of a hard one. Having strayed from the real question, Bill dismissed the new information because it was irrelevant to his replacement question.
“belief perseverance.” People can be astonishingly intransigent—and capable of rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.
The Yale professor Dan Kahan has done much research showing that our judgments about risks—Does gun control make us safer or put us in danger?—are driven less by a careful weighing of evidence than by our identities, which is why people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other.
Psychologists call this the dilution effect, and given that stereotypes are themselves a source of bias we might say that diluting them is all to the good. Yes and no.
superforecasters not only update more often than other forecasters, they update in smaller increments.
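One way to picture small-increment updating is Bayes' rule: weak evidence should shift a probability only slightly, while stronger evidence moves it more. The prior and likelihood ratios below are invented for illustration, not taken from the book.

```python
# Sketch: updating a probability by a small increment with Bayes' rule.
# The prior and likelihood ratios are invented numbers for illustration.

def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability given a prior and a likelihood ratio
    (how much more likely the new evidence is if the event will happen than if it won't)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.60                                    # current forecast
p = bayes_update(p, likelihood_ratio=1.3)   # mildly supportive news nudges it up a little
print(f"after weak evidence:     {p:.2f}")  # ~0.66, a small increment
p = bayes_update(p, likelihood_ratio=0.5)   # stronger contrary evidence pulls it back down
print(f"after contrary evidence: {p:.2f}")  # ~0.49
```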
Forecasters who use ambiguous language and rely on flawed memories to retrieve old forecasts don’t get clear feedback, which makes it impossible to learn from experience.
The lesson he drew: “Be careful about making assumptions of expertise, ask experts if you can find them, reexamine your assumptions from time to time.”
But when the scenarios implied that their correct forecast could easily have turned out wrong, they dismissed it as speculative. So experts were open to “I was almost right” scenarios but rejected “I was almost wrong” alternatives.
Computer programmers have a wonderful term for a program that is not intended to be released in a final version but will instead be used, analyzed, and improved without end. It is “perpetual beta.”
In Janis’s hypothesis, “members of any small cohesive group tend to maintain esprit de corps by unconsciously developing a number of shared illusions and related norms that interfere with critical thinking and reality testing.”3 Groups that get along too well don’t question assumptions or confront uncomfortable facts.
bring in outsiders, suspend hierarchy, and keep the leader’s views under wraps. There’s also the “premortem,” in which the team is told to assume a course of action has failed and to explain why—which makes team members feel safe to express doubts they may have about the leader’s plan.
“givers,” “matchers,” and “takers.” Givers are those who contribute more to others than they receive in return; matchers give as much as they get; takers give less than they take.
Grant’s research shows that the pro-social example of the giver can improve the behavior of others, which helps everyone, including the giver—which explains why Grant has found that givers tend to come out on top.
Confidence will be on everyone’s list. Leaders must be reasonably confident, and instill confidence in those they lead, because nothing can be accomplished without the belief that it can be.
Decisiveness is another essential attribute. Leaders can’t ruminate endlessly.
And leaders must deliver a vision—the goal that everyone strives ...
“Once a course of action has been initiated it must not be abandoned without overriding reason,” the Wehrmacht manual stated. “In the changing situations of combat, however, inflexibly clinging to a course of action can lead to failure. The art of leadership consists of the timely recognition of circumstances and of the moment when a new decision is required.”
The humility required for good judgment is not self-doubt.
It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.
That one tiny question doesn’t nail down the big question, but it does contribute a little insight. And if we ask many tiny-but-pertinent questions, we can close in on an answer for the big question.
The answers are cumulative.
Bear in mind the two basic errors it is possible to make here. We could fail to try to predict the potentially predictable or we could waste our time trying to predict the unpredictable. Which error would be worse in the situation you face?
Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.
Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort?
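Answering the outside-view question is mostly counting: pick a reference class of comparable past situations and compute how often the outcome occurred. The counts below are invented for illustration; the resulting base rate is the anchor the inside view then adjusts.

```python
# Sketch of the outside-view question as arithmetic: choose a reference class of
# comparable past cases and count how often the outcome occurred.
# The reference-class counts below are invented for illustration.
similar_past_cases = 40       # situations "of this sort"
cases_where_it_happened = 8   # how many ended the way the question asks about

base_rate = cases_where_it_happened / similar_past_cases
print(f"Outside-view starting estimate: {base_rate:.0%}")  # 20%, the anchor to adjust from the inside view
```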
In classical dialectics, thesis meets antithesis, producing synthesis. In dragonfly eye, one view meets another and another and another—all of which must be synthesized into a single image.
Synthesis is an art that requires reconciling irreducibly subjective judgments. If you do it well, engaging in this process of synthesizing should transform you from a cookie-cutter dove or hawk into an odd hybrid creature, a dove-hawk, with a nuanced view of when tougher or softer policies are likelier to work.