Kindle Notes & Highlights
Read between June 26 and June 27, 2022
Statisticians call that the base rate—how common something is within a broader class. Daniel Kahneman has a much more evocative visual term for it. He calls it the “outside view”—in contrast to the “inside view,” which is the specifics of the particular case. A few minutes with Google tells me about 62% of American households own pets. That’s the outside view here. Starting with the outside view means I will start by estimating that there is a 62% chance the Renzettis have a pet. Then I will turn to the inside view—all those details about the Renzettis—and use them to adjust that initial 62%
…
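The mechanics of starting from the outside view can be made concrete. Below is a minimal sketch (mine, not the book's) of base-rate-first estimation: anchor on the base rate, then let inside-view details adjust it via Bayes' rule. The likelihood numbers for the Renzetti details are invented for illustration.

```python
# A minimal sketch of "outside view first": anchor on the base rate,
# then adjust with inside-view evidence via Bayes' rule.
def update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability after one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Outside view: about 62% of American households own pets.
p = 0.62
# Inside view (hypothetical): a detail about the Renzettis assumed to be
# twice as likely if they own a pet as if they don't (0.8 vs. 0.4).
p = update(p, likelihood_if_true=0.8, likelihood_if_false=0.4)
print(f"{p:.0%}")  # ~77%: the base rate, nudged by case specifics
```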
You may wonder why the outside view should come first. After all, you could dive into the inside view and draw conclusions, then turn to the outside view. Wouldn’t that work as well? Unfortunately, no, it probably wouldn’t. The reason is a basic psychological concept called anchoring. When we make estimates, we tend to start with some number and adjust. The number we start with is called the anchor. It’s important because we typically underadjust, which means a bad anchor can easily produce a bad estimate. And it’s astonishingly easy to settle on a bad anchor. In classic experiments, Daniel
…
A good exploration of the inside view does not involve wandering around, soaking up any and all information and hoping that insight somehow emerges. It is targeted and purposeful: it is an investigation, not an amble.11
This sounds like detective work because it is—or to be precise, it is detective work as real investigators do it, not as detectives do it on TV shows. It’s methodical, slow, and demanding. But it works far better than wandering aimlessly in a forest of information.
Coming up with an outside view, an inside view, and a synthesis of the two isn’t the end. It’s a good beginning. Superforecasters constantly look for other views they can synthesize into their own.
That is a very smart move. Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person.13 The same effect was produced simply by letting several weeks pass before asking people to make a second estimate. This approach, built on the “wisdom of the crowd” concept, has been called “the crowd within.” The billionaire financier George Soros exemplifies
…
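A toy simulation (my illustration, not the researchers' data) shows why averaging a second judgment with the first helps: independent errors partially cancel. Real "crowd within" estimates from one person are correlated, so the gain is smaller than from a genuinely independent second person, as the study found.

```python
# Toy "crowd within" simulation: averaging two noisy estimates of the
# same true value beats relying on either estimate alone.
import random

random.seed(42)
TRUE_VALUE = 100.0
trials = 10_000
err_single = err_averaged = 0.0
for _ in range(trials):
    first = TRUE_VALUE + random.gauss(0, 10)   # initial judgment
    second = TRUE_VALUE + random.gauss(0, 10)  # second, independent judgment
    err_single += abs(first - TRUE_VALUE)
    err_averaged += abs((first + second) / 2 - TRUE_VALUE)

print(f"mean error, single estimate: {err_single / trials:.2f}")   # ~8.0
print(f"mean error, averaged pair:   {err_averaged / trials:.2f}")  # ~5.6
```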
For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded. It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.
But as researchers have shown, people who use “50%” or “fifty-fifty” often do not mean it literally. They mean “I’m not sure” or “it’s uncertain”—or more simply “maybe.”5 Given
Like the fictional Leon Panetta, Obama may have been bothered by the wide variation in estimates. The disagreement made him think they were unreliable. So he retreated to what probability theorists call the ignorance prior, the state of knowledge you are in before you know whether the coin will land heads or tails or, in this case, whether Osama will be in the master bedroom when the Navy SEALs come knocking. And that was a mistake because it meant Obama did not make full use of the information available at the table.7 But unlike that of the fictional Leon Panetta, Obama’s mental dial didn’t
…
Bowden’s account reminded me of an offhand remark that Amos Tversky made some thirty years ago, when we served on that National Research Council committee charged with preventing nuclear war. In dealing with probabilities, he said, most people only have three settings: “gonna happen,” “not gonna happen,” and “maybe.”
A confident yes or no is satisfying in a way that maybe never is, a fact that helps to explain why the media so often turn to hedgehogs who are sure they know what is coming no matter how bad their forecasting records may be. Of course it’s not always wrong to prefer a confident judgment. All else being equal, our answers to questions like “Does France have more people than Italy?” are likelier to be right when we are confident they are right than when we are not. Confidence and accuracy are positively correlated. But research shows we exaggerate the size of the correlation. For instance,
…
…surprising. “Most people would identify science with certainty,” wrote the mathematician and statistician William Byers. “Certainty, they feel, is a state of affairs with no downside, so the most desirable situation would be one of absolute certainty. Scientific results and theories seem to promise such certainty.”14 In the popular mind, scientists generate facts and chisel them into granite tablets. This collection of facts is what we call “science.” As the work of accumulating facts proceeds, uncertainty is pushed back. The ultimate goal of science is uncertainty’s total eradication. But
…
An awareness of irreducible uncertainty is the core of probabilistic thinking, but it’s a tricky thing to measure. To do that, we took advantage of a distinction that philosophers have proposed between “epistemic” and “aleatory” uncertainty. Epistemic uncertainty is something you don’t know but is, at least in theory, knowable. If you wanted to predict the workings of a mystery machine, skilled engineers could, in theory, pry it open and figure it out. Mastering mechanisms is a prototypical clocklike forecasting challenge. Aleatory uncertainty is something you not only don’t know; it is
…
Another nugget of evidence comes from the phrase “fifty-fifty.” To careful probabilistic thinkers, 50% is just one in a huge range of settings, so they are no likelier to use it than 49% or 51%. Forecasters who use a three-setting mental dial are much likelier to use 50% when they are asked to make probabilistic judgments because they use it as a stand-in for maybe. Hence, we should expect frequent users of 50% to be less accurate. And that’s exactly what the tournament data show.20
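The accuracy cost of treating 50% as "maybe" is easy to see with the Brier score the tournament used (the mean squared difference between forecast and outcome; lower is better). The five forecasts and outcomes below are invented for illustration.

```python
# Brier score: mean squared error between probabilistic forecasts (0-1)
# and outcomes (1 = happened, 0 = didn't). Lower is better.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]             # what actually happened
granular = [0.8, 0.2, 0.7, 0.9, 0.3]   # fine-grained, well-calibrated forecasts
hedged   = [0.5, 0.5, 0.5, 0.5, 0.5]   # "fifty-fifty" as a stand-in for maybe

print(brier(granular, outcomes))  # 0.054
print(brier(hedged, outcomes))    # 0.25
```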
Most people never attempt to be as precise as Brian, preferring to stick with what they know, which is the two- or three-setting mental model. That is a serious mistake. As the legendary investor Charlie Munger sagely observed, “If you don’t get this elementary, but mildly unnatural, mathematics of elementary probability into your repertoire, then you go through a long life like a one-legged man in an ass-kicking contest.”23
Like oil and water, chance and fate do not mix. And to the extent that we allow our thoughts to move in the direction of fate, we undermine our ability to think probabilistically.
If it’s true that probabilistic thinking is essential to accurate forecasting, and it-was-meant-to-happen thinking undermines probabilistic thinking, we should expect superforecasters to be much less inclined to see things as fated. To test this, we probed their reactions to pro-fate statements like these: “Events unfold according to God’s plan.” “Everything happens for a reason.” “There are no accidents or coincidences.” We also asked them about pro-probability statements like these: “Nothing is inevitable.” “Even major events like World War II or 9/11 could have turned out very differently.”
…
Forecasts aren’t like lottery tickets that you buy and file away until the big draw. They are judgments that are based on available information and that should be updated in light of changing information. If new polls show a candidate has surged into a comfortable lead, you should boost the probability that the candidate will win. If a competitor unexpectedly declares bankruptcy, revise expected sales accordingly. The IARPA tournament was no different. After Bill Flack did all his difficult initial work and concluded there was a 60% chance that polonium would be detected in Yasser Arafat’s
…
More important, it is a huge mistake to belittle belief updating. It is not about mindlessly adjusting forecasts in response to whatever is on CNN. Good updating requires the same skills used in making the initial forecast and is often just as demanding. It can even be more challenging.
This is an extreme case of what psychologists call “belief perseverance.” People can be astonishingly intransigent—and capable of rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs. Consider the 1942 argument of General John DeWitt, a strong supporter of the internment of Japanese Americans: “The very fact that no sabotage has taken place to date is a disturbing and confirming indication that such action will be taken.”6—or to put that more bluntly, “The fact that what I expected to happen didn’t happen proves that it will.” Fortunately, such
…
Social psychologists have long known that getting people to publicly commit to a belief is a great way to freeze it in place, making it resistant to change. The stronger the commitment, the greater the resistance.8
The Yale professor Dan Kahan has done much research showing that our judgments about risks—Does gun control make us safer or put us in danger?—are driven less by a careful weighing of evidence than by our identities, which is why people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other. Psycho-logic trumps logic. And when Kahan asks people who feel strongly that gun control increases risk, or diminishes it, to imagine conclusive evidence that shows they are wrong, and then asks if they would change
…
This suggests that superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast. Except in rare circumstances—when Jean-Pierre Beugoms answers military questions, for example—they aren’t deeply committed to their judgments, which makes it easier to admit when a forecast is off track and adjust.
Psychologists call this the dilution effect, and given that stereotypes are themselves a source of bias we might say that diluting them is all to the good. Yes and no. Yes, it is possible to fight fire with fire, and bias with bias, but the dilution effect remains a bias. Remember what’s going on here. People base their estimate on what they think is a useful tidbit of information. Then they encounter clearly irrelevant information—meaningless noise—which they indisputably should ignore. But they don’t. They sway in the wind, at the mercy of the next random gust of irrelevant information.
Many studies have found that those who trade more frequently get worse returns than those who lean toward old-fashioned buy-and-hold strategies.
The tournament data prove it: superforecasters not only update more often than other forecasters, they update in smaller increments.
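One way to picture small-increment updating (my sketch, not the book's method) is to work in log-odds, where each modest piece of news nudges the forecast rather than lurching it. The likelihood ratios below are invented.

```python
# Small-increment belief updating in log-odds space. Each piece of news
# is assumed to carry a modest likelihood ratio (values are invented).
import math

def to_log_odds(p):
    return math.log(p / (1 - p))

def to_prob(log_odds):
    return 1 / (1 + math.exp(-log_odds))

p = 0.60  # initial forecast
for likelihood_ratio in (1.2, 0.9, 1.3, 1.1):  # four modest pieces of news
    p = to_prob(to_log_odds(p) + math.log(likelihood_ratio))
    print(f"{p:.2f}")
# Many small nudges instead of one lurch: 0.64, 0.62, 0.68, 0.70
```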
The one consistent belief of the “consistently inconsistent” John Maynard Keynes was that he could do better. Failure did not mean he had reached the limits of his ability. It meant he had to think hard and give it another go. Try, fail, analyze, adjust, try again: Keynes cycled through those steps ceaselessly.
We learn new skills by doing. We improve those skills by doing more.
To demonstrate the limits of learning from lectures, the great philosopher and teacher Michael Polanyi wrote a detailed explanation of the physics of riding a bicycle: “The rule observed by the cyclist is this. When he starts falling to the right he turns the handlebars to the right, so that the course of the bicycle is deflected along a curve towards the right. This results in a centrifugal force pushing the cyclist to the left and offsets the gravitational force dragging him down to the right.” It continues in that vein and closes: “A simple analysis shows that for a given angle of unbalance
…
Effective practice also needs to be accompanied by clear and timely feedback. My research collaborator Don Moore points out that police officers spend a lot of time figuring out who is telling the truth and who is lying, but research has found they aren’t nearly as good at it as they think they are and they tend not to get better with experience. That’s because experience isn’t enough. It must be accompanied by clear feedback.
That is essential. To learn from failure, we must know when we fail.
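A simple form of that feedback is a calibration check: bucket past forecasts by stated probability and compare each bucket with how often the event actually happened. The forecast records below are invented for illustration.

```python
# Calibration check: for each stated probability, how often did the
# event actually happen? (Forecast records here are invented.)
from collections import defaultdict

records = [  # (stated probability, outcome: 1 = happened, 0 = didn't)
    (0.9, 1), (0.9, 1), (0.9, 0),
    (0.6, 1), (0.6, 0), (0.6, 1),
    (0.2, 0), (0.2, 0), (0.2, 1),
]

bins = defaultdict(list)
for prob, outcome in records:
    bins[prob].append(outcome)

for prob in sorted(bins):
    outcomes = bins[prob]
    rate = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}: happened {rate:.0%} of {len(outcomes)} times")
```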
Vague language is elastic language.
Once we know the outcome of something, that knowledge skews our perception of what we thought before we knew the outcome: that’s hindsight bias. Baruch Fischhoff was the first to document the phenomenon in a set of elegant experiments.
Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress.
In his 1972 classic, Victims of Groupthink, the psychologist Irving Janis—one of my PhD advisers at Yale long ago—explored the decision making that went into both the Bay of Pigs invasion and the Cuban missile crisis. Today, everyone has heard of groupthink, although few have read the book that coined the term or know that Janis meant something more precise than the vague catchphrase groupthink has become today. In Janis’s hypothesis, “members of any small cohesive group tend to maintain esprit de corps by unconsciously developing a number of shared illusions and related norms that interfere
…
Groups can be wise, or mad, or both. What makes the difference isn’t just who is in the group, as Kennedy’s circle of advisers demonstrated. The group is its own animal.
Ultimately, we chose to build teams into our research for two reasons. First, in the real world, people seldom make important forecasts without discussing them with others, so getting a better understanding of forecasting in the real world required a better understanding of forecasting in groups. The other reason? Curiosity. We didn’t know the answer and we wanted to, so we took Archie Cochrane’s advice and ran an experiment.
On the other hand, the opposite of groupthink—rancor and dysfunction—is also a danger. Team members must disagree without being disagreeable, we advised. Practice “constructive confrontation,” to use the phrase of Andy Grove, the former CEO of Intel. Precision questioning is one way to do that. Drawing on the work of Dennis Matthies and Monica Worline, we showed them how to tactfully dissect the vague claims people often make.
The results were clear-cut each year. Teams of ordinary forecasters beat the wisdom of the crowd by about 10%. Prediction markets beat ordinary teams by about 20%. And superteams beat prediction markets by 15% to 30%.
…parts. How the group thinks collectively is an emergent property of the group itself, a property of communication patterns among group members, not just the thought processes inside each member.
Leaders must decide, and to do that they must make and use forecasts. The more accurate those forecasts are, the better, so the lessons of superforecasting should be of intense interest to them. But leaders must also act and achieve their goals. In a word, they must lead. And anyone who has led people may have doubts about how useful the lessons of superforecasting really are for leaders.
Ask people to list the qualities an effective leader must have, or consult the cottage industry devoted to leadership coaching, or examine rigorous research on the subject, and you will find near-universal agreement on three basic points. Confidence will be on everyone’s list. Leaders must be reasonably confident, and instill confidence in those they lead, because nothing can be accomplished without the belief that it can be. Decisiveness is another essential attribute. Leaders can’t ruminate endlessly. They need to size up the situation, make a decision, and move on. And leaders must deliver
…
This looks like a serious dilemma. Leaders must be forecasters and leaders, but it seems that what is required to succeed at one role may undermine the other.
The Prussian military had long appreciated uncertainty—they had invented board games with dice to introduce the element of chance missing from games like chess—but “everything is uncertain” was for Moltke an axiom whose implications needed to be teased out.
“Clarification of the enemy situation is an obvious necessity, but waiting for information in a tense situation is seldom the sign of strong leadership—more often of weakness,” declared the command manual of the Wehrmacht (the German military) published in 1935 and in force throughout World War II. “The first criterion in war remains decisive action.”5 The Wehrmacht also drew a sharp line between deliberation and implementation: once a decision has been made, the mindset changes. Forget uncertainty and complexity. Act! “If one wishes to attack, then one must do so with resoluteness. Half
…
Orders in the Wehrmacht were often short and simple—even when history hung in the balance.
Here’s von Moltke’s description of how to write such an order.
“The rule to follow is that an order shall contain all, but also only, what subordinates cannot determine for themselves to achieve a particular purpose.”
“Great success requires boldness and daring, but good judgment must take precedence,” the Wehrmacht manual stated. “The command of an army and its subordinate units requires leaders capable of judgment, with clear vision and foresight, and the ability to make independent and decisive decisions and carry them out unwaveringly and positively.”13
As a lifelong intelligence officer, Flynn knew the importance of checking assumptions, no matter how true they feel, but he didn’t check this one because it didn’t feel like an assumption. It felt true. It’s the oldest trick in the psychological book and Flynn fell for it.
Just as Kahneman worked with Gary Klein to resolve their disputes about expert intuition, he worked with Barbara Mellers to explore the capacity of superforecasters to resist a bias of particularly deep relevance to forecasting: scope insensitivity.