Kindle Notes & Highlights
It’s a rare day when a journalist says, “The market rose today for any one of a hundred different reasons, or a mix of them, so no one knows.” Instead, like a split-brain patient asked why he is pointing at a picture of a shovel when he has no idea why, the journalist conjures a plausible story from whatever is at hand.
The problem is that we move too fast from confusion and uncertainty (“I have no idea why my hand is pointed at a picture of a shovel”) to a clear and confident conclusion (“Oh, that’s simple”) without spending any time in between (“This is one possible explanation but there are others”).
The interplay between System 1 and System 2 can be subtle and creative. But scientists are trained to be cautious. They know that no matter how tempting it is to anoint a pet hypothesis as The Truth, alternative explanations must get a hearing.
Scientists must be able to answer the question “What would convince me I am wrong?” If they can’t, it’s a sign they have grown too attached to their beliefs.
Our natural inclination is to grab on to the first plausible explanation and happily gather supportive evidence without checking its reliability. That is what psychologists call confirmation bias.
This is a poor way to build an accurate mental model of a complicated world, but it’s a superb way to satisfy the brain’s desire for order because it yields tidy explanations with no loose ends.
Formally, it’s called attribute substitution, but I call it bait and switch: when faced with a hard question, we often surreptitiously replace it with an easy one.
Blink-think is another false dichotomy. The choice isn’t either/or, it is how to blend them in evolving situations.
Drawing such seemingly different conclusions about snap judgments, Kahneman and Klein could have hunkered down and fired off rival polemics. But, like good scientists, they got together to solve the puzzle.
With training or experience, people can encode patterns deep in their memories in vast number and intricate detail—such as the estimated fifty thousand to one hundred thousand chess positions that top players have in their repertoire.
Whether intuition generates delusion or insight depends on whether you work in a world full of valid cues you can unconsciously register for future use.
Learning the cues is a matter of opportunity and effort. Sometimes learning the cues is easy.
“Without those opportunities to learn, a valid intuition can only be due to a lucky accident or to magic,” Kahneman and Klein conclude, “and we do not believe in magic.”
Carlsen respects his intuition, as well he should, but he also does a lot of “double-checking” because he knows that sometimes intuition can let him down and conscious thought can improve his judgment.
The first step in learning what works in forecasting, and what doesn’t, is to judge forecasts, and to do that we can’t make assumptions about what the forecast means.
It is far from unusual that a forecast that at first looks as clear as a freshly washed window proves too opaque to be conclusively judged right or wrong.
In just a few years the world went from the prospect of nuclear war to a new era in which many people—including the Soviet and American leaders—saw a glimmering chance of eliminating nuclear weapons altogether.
Few experts saw this coming. And yet it wasn’t long before most of those who didn’t see it coming grew convinced they knew exactly why it had happened, and what was coming next.
My inner cynic started to suspect that no matter what had happened the experts would have been just as adept at downplaying their predictive failures and sketching an arc of history that made it appear they saw it coming all along.
Take the problem of timelines. Obviously, a forecast without a time frame is absurd. And yet, forecasters routinely make them, as they did in that letter to Ben Bernanke.
Similarly, forecasts often rely on implicit understandings of key terms rather than explicit definitions—
By the time Kent retired from the CIA in 1967, he had profoundly shaped how the American intelligence community does what it calls intelligence analysis—
The key word in Kent’s work is estimate. As Kent wrote, “estimating is what you do when you do not know.”
Analysts should narrow the range of their estimates whenever they can. And to avoid confusion, the terms they use should have designated numerical meanings, which Kent set out in a chart.
But it was never adopted. People liked clarity and precision in principle, but when it came time to make clear and precise forecasts, they weren’t so keen on numbers.
When forecasters are forced to translate terms like “serious possibility” into numbers, they have to think carefully about how they are thinking, a process known as metacognition.
But a more fundamental obstacle to adopting numbers relates to accountability and what I call the wrong-side-of-maybe fallacy.
If the forecast said there was a 70% chance of rain and it rains, people think the forecast was right; if it doesn’t rain, they think it was wrong. This simple mistake is extremely common.
If the event happens, “a fair chance” can retroactively be stretched to mean something considerably bigger than 50%—so the forecaster nailed it.
With perverse incentives like these, it’s no wonder people prefer rubbery words over firm numbers.
If we are serious about measuring and improving, this won’t do. Forecasts must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts.
Calibration calculations require many forecasts, which makes it impractical to judge forecasts about rare events; even with common events, we must be patient data collectors and cautious data interpreters.
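To make the idea of a calibration check concrete, here is a minimal sketch in Python. The data are toy (probability, outcome) pairs invented for illustration, not figures from the book; the point is only that each stated probability needs enough forecasts behind it before comparing it with how often the event actually happened is meaningful.

```python
from collections import defaultdict

# Hypothetical (stated_probability, outcome) pairs; outcome is 1 if the
# event happened, 0 if it did not. Toy data, not from the book.
forecasts = [(0.7, 1), (0.7, 0), (0.7, 1), (0.9, 1), (0.9, 1), (0.3, 0)]

# Group forecasts by the probability that was stated, then compare that
# stated probability with how often the event actually occurred.
buckets = defaultdict(list)
for prob, outcome in forecasts:
    buckets[prob].append(outcome)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"Stated {prob:.0%} -> happened {observed:.0%} of the time "
          f"({len(outcomes)} forecasts)")
```

With only a handful of forecasts per bucket, the observed frequencies bounce around; that is why rare events, and small samples generally, resist calibration testing.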
Perfection is godlike omniscience. It’s saying “this will happen” and it does, or “this won’t happen” and it doesn’t. The technical term for this is “resolution.”
When we combine calibration and resolution, we get a scoring system that fully captures our sense of what good forecasters should do.
In effect, Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better. Perfection is 0.
A forecast that is wrong to the greatest possible extent—saying there is a 100% chance that something will happen and it doesn’t, every time—scores a disastrous 2.0, as far from The Truth as it is possible to get.
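For the arithmetic behind those endpoints, here is a minimal sketch of the two-category Brier score the passage describes (0 is perfect, 2.0 is maximally wrong). The function name and the toy forecasts are illustrative assumptions, not from the book.

```python
def brier_score(forecasts):
    """forecasts: list of (probability_assigned_to_event, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it did not."""
    total = 0.0
    for p, o in forecasts:
        # Two-category form: squared error on the event plus squared error
        # on the non-event. For a binary forecast this equals 2 * (p - o) ** 2,
        # so a single maximally wrong forecast scores 2.0 and a perfect one 0.0.
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

print(brier_score([(1.0, 1)]))            # 0.0  (perfect)
print(brier_score([(1.0, 0)]))            # 2.0  (maximally wrong)
print(brier_score([(0.7, 1), (0.7, 0)]))  # 0.58 (mixed results)
```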
We have to interpret the meaning of the Brier scores, which requires two more things: benchmarks and comparability.
Anonymity also ensured that participants would make their best guesses, uninfluenced by fear of embarrassment. The effects of public competition would have to wait for a future study.
The final results appeared in 2005—twenty-one years, six presidential elections, and three wars after I sat on the National Research Council panel that got me thinking about forecasting.
The average expert was roughly as accurate as a dart-throwing chimpanzee.
So why did one group do better than the other? It wasn’t whether they had PhDs or access to classified information. Nor was it what they thought—whether they were liberals or conservatives, optimists or pessimists. The critical factor was how they thought.
One group tended to organize their thinking around Big Ideas, although they didn’t agree on which Big Ideas were true or false.
As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into their preferred cause-effect templates and treated what did not fit as irrelevant distractions.
The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could.
They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds.
2,500-year-old Greek poetry attributed to the warrior-poet Archilochus: “The fox knows many things but the hedgehog knows one big thing.”
I dubbed the Big Idea experts “hedgehogs” and the more eclectic experts “foxes.” Foxes beat hedgehogs.
Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t.
Kudlow’s one Big Idea is supply-side economics. When President George W. Bush followed the supply-side prescription by enacting substantial tax cuts, Kudlow was certain an economic boom of equal magnitude would follow.
The hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next.

