Kindle Notes & Highlights
Now comes the hardest-to-grasp part of Taleb’s view of the world. He posits that historical probabilities—all the possible ways the future could unfold—are distributed like wealth, not height. That means our world is vastly more volatile than most of us realize and we are at risk of grave miscalculations.
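A rough way to see the difference (a minimal sketch with illustrative parameters, not taken from the book): sample a "height-like" normal distribution and a "wealth-like" fat-tailed distribution, then ask how much of the total the top 1% of draws accounts for.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Height-like" world: normal distribution, thin tails (illustrative parameters).
heights = rng.normal(loc=170, scale=10, size=1_000_000)

# "Wealth-like" world: Pareto distribution, fat tails (illustrative parameters).
wealth = (rng.pareto(a=1.5, size=1_000_000) + 1) * 50_000

def top_share(x, pct=0.01):
    """Share of the total accounted for by the largest pct of observations."""
    x = np.sort(x)[::-1]
    k = int(len(x) * pct)
    return x[:k].sum() / x.sum()

print(f"Top 1% share, height-like world: {top_share(heights):.1%}")  # a bit over 1%: no single draw matters much
print(f"Top 1% share, wealth-like world: {top_share(wealth):.1%}")   # often 20% or more: rare draws dominate
```

If possible futures are shaped like the second world, a small number of extreme scenarios carry most of the consequence, which is the sense in which our world is more volatile than it looks.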
if you are as mathematically inclined as Taleb, you get used to the idea that the world we live in is but one that emerged, quasi-randomly, from a vast population of once-possible worlds. The past did not have to unfold as it did, the present did not have to be what it is, and the future is wide open.
Immersion in what-if history can give us a visceral feeling for Taleb’s vision of radical indeterminacy.
On the one hand, the hindsight-tainted analyses that dominate commentary after major events are a dead end.
On the other hand, our expectations of the future are derived from our mental models of how the world works, and every event is an opportunity to learn and improve those models.
Tournaments help researchers learn what improves forecasting and help forecasters sharpen their skills with practice and feedback.
Just as we now expect a pill to have been tested in peer-reviewed experiments before we swallow it, we will expect forecasters to establish the accuracy of their forecasting with rigorous testing before we heed their advice.
a Boston doctor named Ernest Amory Codman had an idea similar in spirit to forecaster scorekeeping. He called it the End Result System.
“Codman’s plan disregarded a physician’s clinical reputation or social standing as well as bedside manner or technical skills,” noted the historian Ira Rutkow. “All that counted were the clinical consequences of a doctor’s effort.”
Ernest Codman, Archie Cochrane, and many others overcame entrenched interests. They did it not by storming the ramparts. They did it with reason and a relentless focus on the singular goal of making the sick well.
The same thinking is transforming charitable foundations, which are increasingly making funding contingent on stringent program evaluation.
both athletes and teams have improved dramatically over the last thirty or forty years. In part, that’s because the stakes are bigger. But it also happened because what they are doing has increasingly become evidence based.
“A key part of the ‘performance revolution’ in sports, then, is the story of how organizations, in a systematic way, set about making employees more effective and productive.”
an evidence-based forecasting movement would not be a startling change springing up out of nothing. It would be another manifestation of a broad and deep shift away from decision making based on experience, intuition, and authority…
Far too many people treat numbers like sacred totems offering divine insight. The truly numerate know that numbers are tools, nothing more, and their quality can range from wretched to superb.
What matters is the big question, but the big question can’t be scored. The little question doesn’t matter, but it can be scored, so the IARPA tournament went with it.
I call this Bayesian question clustering because of its family resemblance to the Bayesian updating
In future research, I want to develop the concept and see how effectively we can answer unscorable “big questions” with clusters of little ones.
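One way to picture the idea (a hypothetical sketch; the questions, prior, and likelihood ratios are invented, and this is an interpretation rather than Tetlock’s own specification): treat the unscorable big question as a prior probability, and let each resolved little question in the cluster nudge it up or down with a Bayesian update.

```python
def update(prior_odds: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Big, unscorable question: "Will tensions in region X escalate this year?"
p = 0.30                 # prior belief that the answer is "yes" (invented)
odds = p / (1 - p)

# Cluster of small, scorable questions. Each resolved outcome carries a likelihood ratio:
# how much more likely that observation is if the big question is headed toward "yes".
cluster = [
    ("border incident reported this month", 2.0),   # happened; points toward "yes"
    ("new sanctions announced",             0.7),   # did not happen; mild evidence against
    ("leaders hold a summit",               0.5),   # happened; de-escalatory signal
]

for question, lr in cluster:
    odds = update(odds, lr)

posterior = odds / (1 + odds)
print(f"Big-question proxy after the cluster resolves: {posterior:.0%}")  # ~23%
```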
one way to identify a good question is what I call the smack-the-forehead test: when you read the question after time has passed, you smack your forehead and say, “If only I had thought of that before!”
While we may assume that a superforecaster would also be a superquestioner, and vice versa, we don’t actually know that. Indeed, my best scientific guess is that they often are not.
superb question generation often seems to accompany a hedgehog-like incisiveness and confidence that one has a Big Idea grasp of the deep drivers of an event.
Superforecasters and superquestioners need to acknowledge each other’s complementary strengths, not dwell on each other’s alleged weaknesses.
A major point of view rarely has zero merit, and if a forecasting contest produces a split decision we will have learned that the reality is more mixed than either side thought. If learning, not gloating, is the goal, that is progress.
The guidelines sketched here distill key themes in this book and in training systems that have been experimentally demonstrated to boost accuracy in real-world forecasting contests.
(1) Triage. Focus on questions where your hard work is likely to pay off.
(2) Break seemingly intractable problems into tractable sub-problems.
Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.
The surprise is how often remarkably good probability estimates arise from a remarkably crude series of assumptions and guesstimates.
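A Fermi-style decomposition makes the point concrete; every number in this sketch is a deliberately crude guess for the classic "How many piano tuners are there in Chicago?" exercise.

```python
# Break the intractable question into guessable parts; every figure is a rough guess.
population        = 2_500_000   # people in Chicago
people_per_house  = 2.5         # average household size
share_with_piano  = 1 / 20      # fraction of households that own a piano
tunings_per_year  = 1           # tunings per piano per year
tunings_per_tuner = 2 * 5 * 50  # a tuner does 2 a day, 5 days a week, 50 weeks a year

households = population / people_per_house
pianos     = households * share_with_piano
demand     = pianos * tunings_per_year       # tunings needed per year
tuners     = demand / tunings_per_tuner      # tuners needed to meet that demand

print(f"Estimated piano tuners in Chicago: {tuners:.0f}")   # ~100
```

The individual guesses are shaky, but their errors partly cancel, so the final figure often lands within a factor of two or three of the truth.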
(3) Strike the right balance between inside and outside views.
Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort?
Summers’s strategy: he doubled the employee’s estimate, then moved to the next higher time unit. “So, if the research assistant says the task will take an hour, it will take two days. If he says two days, it will take four weeks.”
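A minimal sketch of the balance, with invented numbers and weights: anchor on the outside-view base rate, then adjust only partway toward the vivid inside-view story.

```python
base_rate       = 0.25   # outside view: how often things like this happen in situations like this
inside_estimate = 0.65   # inside view: how likely it feels given this case's specifics

weight_outside = 0.7     # stay anchored on the base rate (the weighting is illustrative, not a rule)
blended = weight_outside * base_rate + (1 - weight_outside) * inside_estimate
print(f"Blended forecast: {blended:.0%}")   # 37%, pulled only partway toward the inside view
```

Summers’s rule of thumb is the same correction applied to schedules: inside-view estimates of our own projects run optimistic, so inflate them toward what tasks like this usually take.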
(4) Strike the right balance between under- and overreacting to evidence.
Skillful updating requires teasing subtle signals from noisy news flows—all the while resisting the lure of wishful thinking.
Yet superforecasters also know how to jump, to move their probability estimates fast in response to diagnostic signals.
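In Bayesian terms (the likelihood ratios here are invented for illustration), how far the estimate should move depends on how diagnostic the new evidence is: weakly diagnostic news justifies a small step, strongly diagnostic news justifies a jump.

```python
def bayes_update(p: float, likelihood_ratio: float) -> float:
    """Move a probability in response to evidence, using odds * likelihood ratio."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

p = 0.40  # current forecast
print(f"After a noisy rumor       (LR ~1.1): {bayes_update(p, 1.1):.0%}")  # ~42%: barely moves
print(f"After a diagnostic signal (LR ~8.0): {bayes_update(p, 8.0):.0%}")  # ~84%: a jump is warranted
```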
(5) Look for the clashing causal forces at work in each problem. For every good policy argument, there is typically a counterargument that is at least worth acknowledging.
In classical dialectics, thesis meets antithesis, producing synthesis.
Synthesis is an art that requires reconciling irreducibly subjective judgments.
(6) Strive to distinguish as many degrees of doubt as the problem permits but no more.
(7) Strike the right balance between under- and overconfidence, between prudence and decisiveness. Superforecasters understand the risks both of rushing to judgment and of dawdling too long near “maybe.”
(8) Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.
Conduct unflinching postmortems: Where exactly did I go wrong?
Don’t forget to do postmortems on your successes too. Not all successes imply that your reasoning was right. You may have just lucked out by making offsetting errors.
(9) Bring out the best in others and let others bring out the best in you.
(10) Master the error-balancing bicycle. Implementing each commandment requires balancing opposing errors.
(11) Don’t treat commandments as commandments. “It is impossible to lay down binding rules,” Helmuth von Moltke warned, “because two cases will never be exactly the same.”
Brier scoring imposes reputational penalties for overconfidence that are tied to the financial penalties that gamblers would incur from the same errors. If you aren’t willing to bet on the odds implied by your probability estimate, rethink your estimate.
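For concreteness, here is the Brier score in the two-category form the book describes, where 0 is perfect, 0.5 is what constant 50/50 guessing earns, and 2 is maximal overconfident error; the example forecasts are made up.

```python
def brier(forecast_p: float, outcome: int) -> float:
    """Two-category Brier score: forecast_p is the probability assigned to 'yes',
    outcome is 1 if it happened and 0 if it did not."""
    return (forecast_p - outcome) ** 2 + ((1 - forecast_p) - (1 - outcome)) ** 2

forecasts = [(0.90, 1), (0.70, 0), (0.55, 1)]   # (probability of "yes", what actually happened)
scores = [brier(p, o) for p, o in forecasts]
print([round(s, 3) for s in scores])            # [0.02, 0.98, 0.405]
print(f"Mean Brier score: {sum(scores) / len(scores):.3f}")
```

Because the penalty grows with the square of the error, a confident miss (the 0.70 on something that did not happen) costs far more than a cautious one, which is the reputational counterpart of losing a bet at the odds you quoted.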
Psychologists call this the fundamental attribution error. We are fully aware that situational factors—like insomnia—can influence our own behavior, and we rightly attribute our behavior to those factors, but we routinely don’t make the same allowance for others and instead assume that their behavior reflects who they are.
I doubt it is because superforecasters are smarter or more open-minded. I suspect they did better because they treat forecasting as a cultivatable skill whereas analysts work inside an organization that treats prediction as a sideshow, not part of the analyst’s real job.
It is amazing how many arbitrary assumptions underlie pretty darn good forecasts. Our choice is not whether to engage in crude guesswork; it is whether to do it overtly or covertly.
This aversion to uncertainty underlies the Ellsberg paradox.

