So the hedgehog’s one Big Idea doesn’t improve his foresight. It distorts it. And more information doesn’t help because it’s all seen through the same tinted glasses.
When hedgehogs in the EPJ research made forecasts on the subjects they knew the most about—their own specialties—their accuracy declined.
an inverse correlation between fame and accuracy: the more famous an expert was, the less accurate he was. That’s not because editors, producers, and the public go looking for bad forecasters. They go looking for hedgehogs, who just happen to be bad forecasters. Animated by a Big Idea, hedgehogs tell tight, simple, clear stories that grab and hold audiences.
Better still, hedgehogs are confident. With their one-perspective analysis, hedgehogs can pile up reasons why they are right—“
Foxes don’t fare so well in the media. They’re less confident, less likely to say something is “certain” or “impossible,” and are likelier to settle on shades of “maybe.”
This aggregation of many perspectives is bad TV. But it’s good forecasting. Indeed, it’s essential.
James Surowiecki’s bestseller The Wisdom of Crowds. Aggregating the judgment of many consistently beats the accuracy of the average member of the group,
There will be individuals who beat the group in each repetition, but they will tend to be different individuals. Beating the average consistently requires rare skill.
Some reverently call it the miracle of aggregation but it is easy to demystify. The key is recognizing that useful information is often dispersed widely, with one person possessing a scrap, another holding a more important piece, a third having a few bits, and so on.
With valid information piling up and errors nullifying themselves, the net result was an astonishingly accurate estimate.
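The error-cancellation mechanism is easy to demonstrate. A minimal simulation, with entirely made-up numbers: each person sees the true value plus independent noise (their scrap of valid information plus their idiosyncratic error), and averaging drives the noise toward zero.

```python
import random

random.seed(42)

TRUE_VALUE = 1000  # the quantity being estimated (illustrative)

# Each guess is the truth plus independent random error: a scrap of
# valid information contaminated by noise unique to that person.
guesses = [TRUE_VALUE + random.gauss(0, 200) for _ in range(500)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"average individual error: {avg_individual_error:.1f}")
print(f"error of the averaged crowd estimate: {crowd_error:.1f}")
```

The averaged estimate is far closer to the truth than the typical individual guess, because the positive and negative errors nullify each other while the shared valid information accumulates.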
Aggregating the judgments of many people who know nothing produces a lot of nothing.
aggregating the judgments of an equal number of people who know lots about lots of different things is most effective because the collective pool of information becomes much bigger.
Now look at how foxes approach forecasting. They deploy not one analytical idea but many and seek out information not from one source but many. Then they synthesize it all into a single conclusion. In a word, they aggregate.
They integrate perspectives and the information contained within them. The only real difference is that the process occurs within one skull.
Consider a guess-the-number game in which players must guess a number between 0 and 100. The person whose guess comes closest to two-thirds of the average guess of all contestants wins.
Because the contestants are aware of each other, and aware that they are aware, the number is going to keep shrinking until it hits the point where it can no longer shrink. That point is 0. So that’s my final answer.
In the actual contest, many people did work all the way down to 0, but 0 was not the right answer. It wasn’t even close to right. The average guess of all the contestants was 18.91, so the winning guess was 13.
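Both perspectives are easy to check with arithmetic. Pure logic iterates the two-thirds step forever and collapses to zero; the actual result follows from the reported average of 18.91.

```python
# Pure-logic perspective: starting from the midpoint and applying
# "take two-thirds" repeatedly shrinks the guess toward zero.
guess = 50.0
for _ in range(30):
    guess *= 2 / 3
# guess is now effectively zero

# Empirical perspective: real contestants did not all reason that far.
average_guess = 18.91
target = (2 / 3) * average_guess
print(f"two-thirds of the actual average: {target:.2f}")  # ~12.61, so 13 wins
```

Logic alone gets you 0; knowing that real people stop iterating gets you to the neighborhood of the winning answer.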
I failed because I only looked at the problem from one perspective—the perspective of logic.
the best metaphor for this process is the vision of the dragonfly. Like us, dragonflies have two eyes, but theirs are constructed very differently.
there may be as many as thirty thousand of these lenses on a single eye, each one occupying a physical space slightly different from those of the adjacent lenses, giving it a unique perspective.
Stepping outside ourselves and really getting a different view of reality is a struggle. But foxes are likelier to give it a try. Whether by virtue of temperament or habit or conscious effort, they tend to engage in the hard work of consulting other perspectives.
After invading in 2003, the United States turned Iraq upside down looking for WMDs but found nothing. It was one of the worst—arguably the worst—intelligence failures in modern history. The IC was humiliated.
virtually every major intelligence agency on the planet to suspect, with varying confidence, that Saddam was hiding something—not because they had glimpsed what he was hiding but because Saddam was acting like someone who was hiding something.
Even for historians, putting yourself in the position of someone at the time—and not being swayed by your knowledge of what happened later—is a struggle.
This particular bait and switch—replacing “Was it a good decision?” with “Did it have a good outcome?”—is both popular and pernicious.
Good poker players, investors, and executives all understand this. If they don’t, they can’t remain good at what they do—because they will draw false lessons from experience, making their judgment worse over time.
In 2006 the Intelligence Advanced Research Projects Activity (IARPA) was created. Its mission is to fund cutting-edge research with the potential to make the intelligence community smarter and more effective.
But the intelligence community’s forecasts have never been systematically assessed. What there is instead is accountability for process: Intelligence analysts are told what they are expected to do when researching, thinking, and judging, and then held accountable to those standards.
Thanks to IARPA, we now know a few hundred ordinary people and some simple math can not only compete with professionals supported by a multibillion-dollar apparatus but also beat them.
Across all four years of the tournament, superforecasters looking out three hundred days were more accurate than regular forecasters looking out one hundred days.
IARPA knew this could happen when it bankrolled the tournament, which is why a decision like that is so unusual.
And yet, IARPA did just that: it put the intelligence community’s mission ahead of the interests of the people inside the intelligence community—at least ahead of those insiders who didn’t want to rock the bureaucratic boat.
it’s easy to misinterpret randomness. We don’t have an intuitive feel for it. Randomness is invisible from the tip-of-your-nose perspective. We can only see it if we step outside ourselves.
So regression to the mean is an indispensable tool for testing the role of luck in performance: Mauboussin notes that slow regression is more often seen in activities dominated by skill, while faster regression is more associated with chance.
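That skill/luck distinction can be sketched in a toy simulation (all parameters invented for illustration): give each player a fixed skill plus fresh per-season luck, vary how much of the score is skill, and watch how far last season's top decile falls the next season.

```python
import random

random.seed(0)

def two_seasons(skill_weight, n=10_000):
    """Score = skill_weight * persistent skill + (1 - skill_weight) * luck.
    Skill is fixed per player; luck is redrawn each season."""
    results = []
    for _ in range(n):
        skill = random.gauss(0, 1)
        s1 = skill_weight * skill + (1 - skill_weight) * random.gauss(0, 1)
        s2 = skill_weight * skill + (1 - skill_weight) * random.gauss(0, 1)
        results.append((s1, s2))
    return results

def top_decile_next_season(results):
    """Average season-2 score of the players who were top 10% in season 1."""
    results.sort(key=lambda r: r[0], reverse=True)
    top = results[: len(results) // 10]
    return sum(s2 for _, s2 in top) / len(top)

skill_heavy = top_decile_next_season(two_seasons(0.9))  # stays high
luck_heavy = top_decile_next_season(two_seasons(0.1))   # collapses toward 0

print(f"skill-dominated activity, top decile next season: {skill_heavy:.2f}")
print(f"luck-dominated activity,  top decile next season: {luck_heavy:.2f}")
```

In the skill-dominated run the top performers barely regress; in the luck-dominated run they fall almost all the way back to the mean of zero, which is Mauboussin's diagnostic in miniature.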
in years 2 and 3 we saw the opposite of regression to the mean: the superforecasters as a whole, including Doug Lorch, actually increased their lead over all other forecasters.
So we have a mystery. If chance is playing a significant role, why aren’t we observing significant regression of superforecasters as a whole toward the overall mean?
being recognized as “super” and placed on teams of intellectually stimulating colleagues improved their performance enough to erase the regression to the mean we would otherwise have seen.
Luck plays a role and it is only to be expected that the superstars will occasionally have a bad year and produce ordinary results—just as superstar athletes occasionally look less than stellar.
Regular forecasters scored higher on intelligence and knowledge tests than about 70% of the population. Superforecasters did better, placing higher than about 80% of the population.
although superforecasters are well above average, they did not score off-the-charts high and most fall well short of so-called genius territory,
How many piano tuners are there in Chicago?
we can break the question down by asking, “What information would allow me to answer the question?”
What Fermi understood is that by breaking down the question, we can better separate the knowable and the unknowable.
And the net result tends to be a more accurate estimate than whatever number happened to pop out of the black box when we first read the question.
Of course, all this means we have to overcome our deep-rooted fear of looking dumb. Fermi-izing dares us to be wrong.
Fermi was renowned for his estimates. With little or no information at his disposal, he would often do back-of-the-envelope calculations like this to come up with a number that subsequent measurement revealed to be impressively accurate.
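A Fermi-ized version of the piano-tuner question can be written out explicitly. Every input below is a rough assumption in the Fermi spirit, not real data; the point is that each guessed quantity is easier to reason about than the original black-box question.

```python
# All inputs are illustrative guesses, not measured figures.
chicago_population = 2_500_000
people_per_household = 2.5
households_with_piano = 1 / 20     # assume 1 in 20 households owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4      # ~2 hours per tuning, plus travel
working_days_per_year = 250

pianos = chicago_population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year

tuners = round(tunings_needed / tunings_per_tuner)
print(f"estimated piano tuners in Chicago: {tuners}")
```

None of the inputs is reliable on its own, but errors in opposite directions tend to offset, so the output lands within the right order of magnitude.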
IARPA asked tournament forecasters the following question: “Will either the French or Swiss inquiries find elevated levels of polonium in the remains of Yasser Arafat’s body?”
Neither “Israel would never do that!” nor “Of course Israel did it!” is actually responsive to that question. They answer a different question: “Did Israel poison Yasser Arafat?”
Polonium decays quickly. For the answer to be yes, scientists would have to be able to detect polonium on the remains of a man dead for years.
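The decay constraint is easy to quantify. Polonium-210 has a half-life of about 138 days, and roughly eight years separated Arafat's death in November 2004 from the exhumation and tests.

```python
# Back-of-envelope: how much Po-210 could remain after ~8 years?
half_life_days = 138        # approximate half-life of polonium-210
elapsed_days = 8 * 365      # rough interval between death and testing

half_lives = elapsed_days / half_life_days   # ~21 half-lives
fraction_left = 0.5 ** half_lives

print(f"half-lives elapsed: {half_lives:.1f}")
print(f"fraction of original Po-210 remaining: {fraction_left:.1e}")
```

Only a fraction on the order of one part in a million survives, so a "yes" answer hinged less on whether Israel did it than on whether the labs could detect such a trace at all.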
Arafat had many Palestinian enemies. They could have poisoned him.

