Kindle Notes & Highlights
He was aiming for a PhD in math, but he realized it was beyond his abilities—“I had my nose rubbed in my limitations” is how he puts it—and he dropped out.
Every day, the news media deliver forecasts without reporting, or even asking, how good the forecasters who made the forecasts really are. Every day, corporations and governments pay for forecasts that may be prescient or worthless or something in between. And every day, all of us—leaders of nations, corporate executives, investors, and voters—make critical decisions on the basis of forecasts whose quality is unknown. Baseball managers wouldn’t dream of getting out the checkbook to hire a player without consulting performance statistics. Even fans expect to see player stats on scoreboards and
…
In 1972 the American meteorologist Edward Lorenz wrote a paper with an arresting title: “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” A decade earlier, Lorenz had discovered by accident that tiny data entry variations in computer simulations of weather patterns—like replacing 0.506127 with 0.506—could produce dramatically different long-term forecasts. It was an insight that would inspire “chaos theory”: in nonlinear systems like the atmosphere, even small changes in initial conditions can mushroom to enormous proportions. So, in principle, a lone
…
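To make that sensitivity concrete, here is a minimal Python sketch using the canonical three-variable Lorenz-63 system. Lorenz's 0.506127-to-0.506 truncation actually occurred in his original weather simulation, and the step size, parameters, and crude Euler integration below are illustrative assumptions, but the effect is the same: a rounding-sized difference in initial conditions grows until the two runs disagree completely.

```python
# Sketch: sensitive dependence on initial conditions in Lorenz-63.
# Forward Euler is crude but adequate for illustration.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (0.506127, 1.0, 1.0)  # "full precision" initial condition
b = (0.506, 1.0, 1.0)     # the same state, rounded to three decimals

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 600 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:4.0f}: separation = {gap:.6f}")
```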
Edward Lorenz shifted scientific opinion toward the view that there are hard limits on predictability, a deeply philosophical question.4 For centuries, scientists had supposed that growing knowledge must lead to greater predictability because reality was like a clock—an awesomely big and complicated clock but still a clock—and the more scientists learned about its innards, how the gears grind together, how the weights and springs function, the better they could capture its operations with deterministic equations and predict what it would do. In 1814 the French mathematician and astronomer
…
In one of history’s great ironies, scientists today know vastly more than their colleagues a century ago, and possess vastly more data-crunching power, but they are much less confident in the prospects for perfect predictability.
“I have been struck by how important measurement is to improving the human condition,” Bill Gates wrote. “You can achieve incredible progress if you set a clear goal and find a measure that will drive progress toward that goal….This may seem basic, but it is amazing how often it is not done and how hard it is to get right.”8 He is right about what it takes to drive progress, and it is surprising how rarely it’s done in forecasting. Even that simple first step—setting a clear goal—hasn’t been taken.
Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.
Broadly speaking, superforecasting demands thinking that is open-minded, careful, curious, and—above all—self-critical. It also demands focus. The kind of thinking that produces superior judgment does not come effortlessly. Only the determined can deliver it reasonably consistently, which is why our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance.
It is of paramount importance, in order to make progress, that we recognize this ignorance and this doubt. Because we have the doubt, we then propose looking in new directions for new ideas. The rate of the development of science is not the rate at which you make observations alone but, much more important, the rate at which you create new things to test.11
This compulsion to explain arises with clocklike regularity every time a stock market closes and a journalist says something like “The Dow rose ninety-five points today on news that…” A quick check will often reveal that the news that supposedly drove the market came out well after the market had risen. But that minimal level of scrutiny is seldom applied.
The interplay between System 1 and System 2 can be subtle and creative. But scientists are trained to be cautious. They know that no matter how tempting it is to anoint a pet hypothesis as The Truth, alternative explanations must get a hearing. And they must seriously consider the possibility that their initial hunch is wrong.
“It is wise to take admissions of uncertainty seriously,” Daniel Kahneman noted, “but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”
Formally, it’s called attribute substitution, but I call it bait and switch: when faced with a hard question, we often surreptitiously replace it with an easy one. “Should I worry about the shadow in the long grass?” is a hard question. Without more data, it may be unanswerable. So we substitute an easier question: “Can I easily recall a lion attacking someone from the long grass?”
When something doesn’t fit a pattern—like a kitchen fire giving off more heat than a kitchen fire should—a competent expert senses it immediately. But as we see every time someone spots the Virgin Mary in burnt toast or in mold on a church wall, our pattern-recognition ability comes at the cost of susceptibility to false positives. This, plus the many other ways in which the tip-of-your-nose perspective can generate perceptions that are clear, compelling, and wrong, means intuition can fail as spectacularly as it can work.
Consider that if an intelligence agency says there is a 65% chance that an event will happen, it risks being pilloried if it does not—and because the forecast itself says there is a 35% chance it will not happen, that’s a big risk. So what’s the safe thing to do? Stick with elastic language. Forecasters who use “a fair chance” and “a serious possibility” can even make the wrong-side-of-maybe fallacy work for them: If the event happens, “a fair chance” can retroactively be stretched to mean something considerably bigger than 50%—so the forecaster nailed it. If it doesn’t happen, it can be
…
That’s consistent with the EPJ data, which revealed an inverse correlation between fame and accuracy: the more famous an expert was, the less accurate he was. That’s not because editors, producers, and the public go looking for bad forecasters. They go looking for hedgehogs, who just happen to be bad forecasters.
Foxes don’t fare so well in the media. They’re less confident, less likely to say something is “certain” or “impossible,” and are likelier to settle on shades of “maybe.” And their stories are complex, full of “howevers” and “on the other hands,” because they look at problems one way, then another, and another.
Some reverently call it the miracle of aggregation but it is easy to demystify. The key is recognizing that useful information is often dispersed widely, with one person possessing a scrap, another holding a more important piece, a third having a few bits, and so on.
Now look at how foxes approach forecasting. They deploy not one analytical idea but many and seek out information not from one source but many. Then they synthesize it all into a single conclusion. In a word, they aggregate. They may be individuals working alone, but what they do is, in principle, no different from what Galton’s crowd did. They integrate perspectives and the information contained within them.
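A small simulation shows the demystified mechanism at work. Assuming, purely for illustration, that each forecaster sees the truth plus independent personal noise, averaging cancels much of the noise, so the aggregate lands closer to the truth than a typical individual does:

```python
# Sketch: averaging independent noisy estimates beats a lone estimate.
import random

random.seed(42)
truth = 0.70                # the "true" probability of some event
solo_errors, crowd_errors = [], []

for trial in range(1000):
    # 20 forecasters, each holding a scrap of signal plus noise
    forecasts = [min(1.0, max(0.0, random.gauss(truth, 0.15)))
                 for _ in range(20)]
    average = sum(forecasts) / len(forecasts)
    solo_errors.append(abs(forecasts[0] - truth))
    crowd_errors.append(abs(average - truth))

print(f"typical lone-forecaster error: {sum(solo_errors) / 1000:.3f}")
print(f"typical crowd-average error:   {sum(crowd_errors) / 1000:.3f}")
```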
…wrote Michael Mauboussin, a global financial strategist, in his book The Success Equation. But as Mauboussin noted, there is an elegant rule of thumb that applies to athletes and CEOs, stock analysts and superforecasters. It involves “regression to the mean.”
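One standard way to write that rule of thumb down (a textbook shrinkage formula in the spirit of Mauboussin's discussion, not a quotation from it): the best guess for the next outcome pulls an extreme result back toward the average, in proportion to how much of performance is skill rather than luck.

```python
# Sketch: regression to the mean as shrinkage toward the average.
# r is the period-to-period correlation, i.e., the "skill share".

def predict_next(observed, mean, r):
    """Shrink an observed extreme toward the population mean."""
    return mean + r * (observed - mean)

league_avg = 0.260   # hypothetical batting average
hot_streak = 0.380   # an extreme early-season performance

for r in (0.9, 0.5, 0.1):          # mostly skill ... mostly luck
    nxt = predict_next(hot_streak, league_avg, r)
    print(f"r = {r}: expect about {nxt:.3f} next period")
```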
What Fermi understood is that by breaking down the question, we can better separate the knowable and the unknowable. So guessing—pulling a number out of the black box—isn’t eliminated. But we have brought our guessing process out into the light of day where we can inspect it. And the net result tends to be a more accurate estimate than whatever number happened to pop out of the black box when we first read the question. Of course, all this means we have to overcome our deep-rooted fear of looking dumb. Fermi-izing dares us to be wrong.
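As a sketch of what Fermi-izing looks like, here is the classic piano-tuners decomposition. Every input is an explicit, inspectable guess rather than a number from the black box; all the figures below are assumptions chosen only to illustrate the method:

```python
# Sketch: a Fermi estimate built from inspectable sub-guesses.
population       = 2_700_000   # people in the city (guess)
people_per_home  = 2.5         # average household size (guess)
share_with_piano = 1 / 20      # households owning a piano (guess)
tunings_per_year = 1           # tunings per piano per year (guess)
jobs_per_day     = 4           # tunings one tuner does daily (guess)
workdays         = 250         # working days per year (guess)

pianos = population / people_per_home * share_with_piano
demand = pianos * tunings_per_year          # tunings needed per year
supply = jobs_per_day * workdays            # tunings one tuner delivers

print(f"estimated piano tuners: {demand / supply:.0f}")   # ~54
```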
But superforecasters wouldn’t bother with any of that, at least not at first. The first thing they would do is find out what percentage of American households own a pet. Statisticians call that the base rate—how common something is within a broader class. Daniel Kahneman has a much more evocative visual term for it. He calls it the “outside view”—in contrast to the “inside view,” which is the specifics of the particular case. A few minutes with Google tells me about 62% of American households own pets. That’s the outside view here. Starting with the outside view means I will start by
…
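A minimal sketch of "outside view first" as the passage describes it: anchor on the 62% base rate, then nudge the estimate with inside-view specifics of the particular household. Only the 62% comes from the passage; the adjustments are invented for illustration.

```python
# Sketch: start from the base rate, then adjust with case specifics.
base_rate = 0.62        # share of US households owning a pet

estimate = base_rate    # the outside view is the anchor
estimate += 0.05        # inside view: kids begging for a dog (guess)
estimate -= 0.10        # inside view: small downtown apartment (guess)

print(f"adjusted estimate: {estimate:.2f}")  # still tethered to the anchor
```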
When we make estimates, we tend to start with some number and adjust. The number we start with is called the anchor. It’s important because we typically underadjust, which means a bad anchor can easily produce a bad estimate. And it’s astonishingly easy to settle on a bad anchor. In classic experiments, Daniel Kahneman and Amos Tversky showed you could influence people’s judgment merely by exposing them to a number—any number, even one that is obviously meaningless, like one randomly selected by the spin of a wheel.10 So a forecaster who starts by diving into the inside view risks being swayed
…
For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded. It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.
Thanks in part to their superior numeracy, superforecasters, like scientists and mathematicians, tend to be probabilistic thinkers. An awareness of irreducible uncertainty is the core of probabilistic thinking, but it’s a tricky thing to measure. To do that, we took advantage of a distinction that philosophers have proposed between “epistemic” and “aleatory” uncertainty. Epistemic uncertainty is something you don’t know but is, at least in theory, knowable. If you wanted to predict the workings of a mystery machine, skilled engineers could, in theory, pry it open and figure it out. Mastering
…
So finding meaning in events is positively correlated with wellbeing but negatively correlated with foresight. That sets up a depressing possibility: Is misery the price of accuracy? I don’t know. But this book is not about how to be happy. It’s about how to be accurate, and the superforecasters show that probabilistic thinking is essential for that. I’ll leave the existential issues to others.
An updated forecast is likely to be a better-informed forecast and therefore a more accurate forecast. “When the facts change, I change my mind,” the legendary British economist John Maynard Keynes declared. “What do you do, sir?” The superforecasters do likewise, and that is another big reason why they are super.
The signal feels strong and clear—and our judgment reflects that. But add irrelevant information and we can’t help but see Robert or David more as a person than a stereotype, which weakens the fit.10 Psychologists call this the dilution effect, and given that stereotypes are themselves a source of bias, we might say that diluting them is all to the good.
He knows Bayes’ theorem but he didn’t use it even once to make his hundreds of updated forecasts. And yet Minto appreciates the Bayesian spirit. “I think it is likely that I have a better intuitive grasp of Bayes’ theorem than most people,” he said, “even though if you asked me to write it down from memory I’d probably fail.” Minto is a Bayesian who does not use Bayes’ theorem. That paradoxical description applies to most superforecasters.
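For reference, this is the theorem Minto says he couldn't write from memory, in its convenient odds form: posterior odds equal prior odds times the likelihood ratio of the new evidence. The numbers below are hypothetical; the point is that superforecasters approximate this arithmetic by feel:

```python
# Sketch: Bayes' theorem in odds form for updating a forecast.

def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability given the prior and the
    likelihood ratio P(evidence | event) / P(evidence | no event)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.30                      # initial forecast
p = bayes_update(p, 4.0)      # news four times likelier if event is coming
print(f"updated forecast: {p:.2f}")   # ~0.63
```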
Simpson has a “growth mindset,” which Dweck defines as believing that your abilities are largely the product of effort—that you can “grow” to the extent that you are willing to work hard and learn.2 Some people might think that’s so obviously true it scarcely needs to be said. But as Dweck’s research has shown, the growth mindset is far from universal. Many people have what she calls a “fixed mindset”—the belief that we are who we are, and abilities can only be revealed, not created and developed.
In one of many experiments Dweck devised to reveal the crippling power of the fixed mindset, she gave relatively easy puzzles to fifth graders. They enjoyed them. She then gave the children harder puzzles. Some of the children suddenly lost interest and declined an offer to take the puzzles home. Others loved the harder puzzles even more than the easy ones. “Could you write down the name of these puzzles,” one child asked, “so my mom can buy me some more when these ones run out?” The difference between the two groups of children was not “puzzle-solving talent.” Even among equally adept children,
…
…Buffett’s fortune. The one consistent belief of the “consistently inconsistent” John Maynard Keynes was that he could do better. Failure did not mean he had reached the limits of his ability. It meant he had to think hard and give it another go. Try, fail, analyze, adjust, try again: Keynes cycled through those steps ceaselessly. Keynes operated on a higher plane than most of us, but that process—try, fail, analyze, adjust, try again—is fundamental to how all of us learn, almost from the moment we are born. Look at a baby learning to sit up.
An expert who thought there was only a 10% chance might remember herself thinking there was a 40% or 50% chance. There was even a case in which an expert who pegged the probability at 20% recalled it as 70%—which illustrates why hindsight bias is sometimes known as the “I knew it all along” effect. Forecasters who use ambiguous language and rely on flawed memories to retrieve old forecasts don’t get clear feedback, which makes it impossible to learn from experience.
When the what-iffery implied that their failed forecast would have turned out right—for example, if the coup against Gorbachev in 1991 had been better planned and the plotters had been less drunk and better organized, the Communist Party would still be in power—the experts tended to welcome the what-if tale like an old friend. But when the scenarios implied that their correct forecast could easily have turned out wrong, they dismissed it as speculative. So experts were open to “I was almost right” scenarios but rejected “I was almost wrong” alternatives.
Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress.
…a composite portrait of the modal superforecaster. In philosophic outlook, they tend to be:
CAUTIOUS: Nothing is certain
HUMBLE: Reality is infinitely complex
NONDETERMINISTIC: What happens is not meant to be and does not have to happen
In their abilities and thinking styles, they tend to be:
ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected
INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges
REFLECTIVE: Introspective and self-critical
NUMERATE: Comfortable with numbers
In their methods of…
In Janis’s hypothesis, “members of any small cohesive group tend to maintain esprit de corps by unconsciously developing a number of shared illusions and related norms that interfere with critical thinking and reality testing.”3 Groups that get along too well don’t question assumptions or confront uncomfortable facts. So everyone agrees, which is pleasant, and the fact that everyone agrees is tacitly taken to be proof the group is on the right track.
The story of how the Kennedy White House changed its decision-making culture for the better is a must-read for students of management and public policy because it captures the dual-edged nature of working in groups. Teams can cause terrible mistakes. They can also sharpen judgment and accomplish together what cannot be done alone. Managers tend to focus on the negative or the positive but they need to see both. As mentioned earlier, the term “wisdom of crowds” comes from…
…forecasters can become too friendly, letting groupthink set in. These two tendencies can reinforce each other. We all agree, so our work is done, right? And unanimity within a group is a powerful force. If that agreement is ill-founded, the group slips into self-righteous complacency.
Groupthink is a danger. Be cooperative but not deferential. Consensus is not always good; disagreement not always bad. If you do happen to agree, don’t take that agreement—in itself—as proof that you are right. Never stop doubting. Pointed questions are as essential to a team as vitamins are to a human body. On the other hand, the opposite of groupthink—rancor and dysfunction—is also a danger. Team members must disagree without being disagreeable, we advised. Practice “constructive confrontation,” to use the phrase of Andy Grove, the former CEO of Intel. Precision questioning is one way to do
…
A group of open-minded people who don’t care about one another will be less than the sum of its open-minded parts. A group of opinionated people who engage one another in pursuit of the truth will be more than the sum of its opinionated parts.
Grant’s research shows that the pro-social example of the giver can improve the behavior of others, which helps everyone, including the giver—which explains why Grant has found that givers tend to come out on top.
Combining uniform perspectives only produces more of the same, while slight variation will produce slight improvement. It is the diversity of the perspectives that makes the magic work. Superteams were fairly diverse—because superforecasters are fairly diverse—but we didn’t design them with that in mind. We put ability first. If Page is right, we might have gotten even better results if we had made diversity the key determinant of team membership and let ability take care of itself. Again, though, flag the false dichotomy. The choice is not ability or diversity; it is fine-tuning the mixes of
…
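Page's point can be stated exactly with his diversity prediction theorem, an algebraic identity: the crowd's squared error equals the average individual squared error minus the variance of the individual forecasts (the "diversity"). A sketch with hypothetical forecasts:

```python
# Sketch: Scott Page's diversity prediction theorem,
#   crowd_error = avg_individual_error - diversity  (exactly).
truth = 0.40
forecasts = [0.25, 0.35, 0.55, 0.60, 0.30]   # hypothetical

crowd = sum(forecasts) / len(forecasts)
crowd_err = (crowd - truth) ** 2
avg_indiv = sum((f - truth) ** 2 for f in forecasts) / len(forecasts)
diversity = sum((f - crowd) ** 2 for f in forecasts) / len(forecasts)

print(f"crowd squared error:          {crowd_err:.4f}")
print(f"avg individual squared error: {avg_indiv:.4f}")
print(f"diversity of forecasts:       {diversity:.4f}")
assert abs(crowd_err - (avg_indiv - diversity)) < 1e-12
```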
…to leadership coaching, or examine rigorous research on the subject, and you will find near-universal agreement on three basic points. Confidence will be on everyone’s list. Leaders must be reasonably confident, and instill confidence in those they lead, because nothing can be accomplished without the belief that it can be. Decisiveness is another essential attribute. Leaders can’t ruminate endlessly. They need to size up the situation, make a decision, and move on. And leaders must deliver a vision—the goal that everyone strives together to achieve.
“No plan of operations extends with certainty beyond the first encounter with the enemy’s main strength,” he wrote. That statement was refined and repeated over the decades, and today soldiers know it as “no plan survives contact with the enemy.” That’s much snappier. But notice that Moltke’s original was more nuanced, which is typical of his thinking. “It is impossible to lay down binding rules” that apply in all circumstances, he wrote. In war, “two cases never will be exactly the same.” Improvisation is essential.
The Wehrmacht also drew a sharp line between deliberation and implementation: once a decision has been made, the mindset changes. Forget uncertainty and complexity. Act! “If one wishes to attack, then one must do so with resoluteness. Half measures are out of place,” Moltke wrote. Officers must conduct themselves with “calm and assurance” to “earn the trust of the soldier.” There is no place for doubt. “Only strength and confidence carry the units with them and produce success.” The wise officer knows the battlefield is shrouded in a “fog of uncertainty” but “at least one thing must be
…
Petraeus sees the divide between doers and thinkers as a false dichotomy. Leaders must be both. “The bold move is the right move except when it’s the wrong move,” he says. A leader “needs to figure out what’s the right move and then execute it boldly.”23 That’s the tension between deliberation and implementation that Moltke emphasized and Petraeus balanced in Iraq.
“You have to have tremendous humility in the face of the game because the game is extremely complex, you won’t solve it, it’s not like tic-tac-toe or checkers,” she says. “It’s very hard to master and if you’re not learning all the time, you will fail. That being said, humility in the face of the game is extremely different than humility in the face of your opponents.” Duke feels confident that she can compete with most people she sits down with at a poker table. “But that doesn’t mean I think I’ve mastered this game.”
Coping with dissonance is hard. “The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function,” F. Scott Fitzgerald observed in “The Crack-Up.” It requires teasing apart our feelings about the Nazi regime from our factual judgments about the Wehrmacht’s organizational resilience—and seeing the Wehrmacht as both a horrific organization that deserved to be destroyed and an effective organization with lessons to teach us. There is no logical contradiction, just a psycho-logical tension. If you want to become a
…
Not even knowing it’s an illusion can switch off the illusion. The cognitive illusions that the tip-of-your-nose perspective sometimes generates are similarly impossible to stop. We can’t switch off the tip-of-our-nose perspective. We can only monitor the answers that bubble up into consciousness—and, when we have the time and cognitive capacity, use a ruler to check.