Kindle Notes & Highlights
Read between January 1 - January 20, 2019
The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction, and that is enough.
“chaos theory”: in nonlinear systems like the atmosphere, even small changes in initial conditions can mushroom to enormous proportions.
How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances.
Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs.
superforecasting demands thinking that is open-minded, careful, curious, and—above all—self-critical. It also demands focus. The kind of thinking that produces superior judgment does not come effortlessly. Only the determined can deliver it reasonably consistently, which is why our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance.
A defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based.
“It is wise to take admissions of uncertainty seriously,” Daniel Kahneman noted, “but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”
At lunch one day in 1988, my then–Berkeley colleague Daniel Kahneman tossed out a testable idea that proved prescient. He speculated that intelligence and knowledge would improve forecasting but the benefits would taper off fast. People armed with PhDs and decades of experience may be only a tad more accurate than attentive readers of the New York Times.
Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better. Perfection is 0. A hedged fifty-fifty call, or random guessing in the aggregate, will produce a Brier score of 0.5. A forecast that is wrong to the greatest possible extent—saying there is a 100% chance that something will happen and it doesn’t, every time—scores a disastrous 2.0, as far from The Truth as it is possible to get.
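The scoring described above matches Brier's original two-category formulation: sum the squared differences between the forecast probability and the realized outcome (1 for what happened, 0 for what didn't) across all outcome categories. A minimal sketch:

```python
def brier_score(forecast_probs, outcome_index):
    """Original (1950) Brier score over all outcome categories:
    sum of squared gaps between each forecast probability and the
    realized outcome (1 for the category that occurred, else 0)."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast_probs)
    )

# A hedged fifty-fifty call scores 0.5
print(brier_score([0.5, 0.5], outcome_index=0))  # -> 0.5
# Maximally wrong: 100% on the outcome that didn't happen
print(brier_score([1.0, 0.0], outcome_index=1))  # -> 2.0
# Perfection
print(brier_score([1.0, 0.0], outcome_index=0))  # -> 0.0
```

This two-category version is why the worst possible score is 2.0 rather than 1.0: a maximally wrong binary forecast is off by a full unit in both categories.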
he drew on a scrap of 2,500-year-old Greek poetry attributed to the warrior-poet Archilochus: “The fox knows many things but the hedgehog knows one big thing.”
Now look at how foxes approach forecasting. They deploy not one analytical idea but many and seek out information not from one source but many. Then they synthesize it all into a single conclusion. In a word, they aggregate. They may be individuals working alone, but what they do is, in principle, no different from what Galton’s crowd did. They integrate perspectives and the information contained within them. The only real difference is that the process occurs within one skull.
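The aggregation logic behind Galton's crowd can be shown with a toy example. The guesses below are invented for illustration; 1,198 lb is the commonly cited true weight of the ox in Galton's story:

```python
import statistics

true_value = 1198  # commonly cited weight of Galton's ox, in pounds
# Hypothetical independent guesses of the ox's weight
guesses = [1050, 1100, 1150, 1250, 1300, 1180, 1220, 1260]

crowd_estimate = statistics.mean(guesses)
mean_individual_error = statistics.mean(abs(g - true_value) for g in guesses)
crowd_error = abs(crowd_estimate - true_value)

# Averaging cancels offsetting individual errors, so the aggregate
# lands closer to the truth than the typical guess does
assert crowd_error < mean_individual_error
```

The fox does the same thing within one skull: each perspective plays the role of one guess, and synthesis plays the role of the average.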
“All models are wrong,” the statistician George Box observed, “but some are useful.”
The psychologist Ellen Langer has shown how poorly we grasp randomness in a series of experiments. In one, she asked Yale students to watch someone flip a coin thirty times and predict whether it would come up heads or tails. The students could not see the actual flipping but they were told the results of each toss. The results, however, were rigged: all students got a total of fifteen right and fifteen wrong, but some students got a string of hits early while others started with a string of misses. Langer then asked the students how well they thought they would do if the experiment were
Knowledge is something we can all increase, but only slowly. People who haven’t stayed mentally active have little hope of catching up to lifelong learners. Intelligence feels like an even more daunting obstacle.
The same effect was produced simply by letting several weeks pass before asking people to make a second estimate. This approach, built on the “wisdom of the crowd” concept, has been called “the crowd within.”
“Need for cognition” is the psychological term for the tendency to engage in and enjoy hard mental slogs.
For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.
The disagreement made him think they were unreliable. So he retreated to what probability theorists call the ignorance prior, the state of knowledge you are in before you know whether the coin will land heads or tails or, in this case, whether Osama will be in the master bedroom when the Navy SEALs come knocking.
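Retreating to the ignorance prior just means spreading probability evenly over the possibilities until evidence arrives, then updating. A minimal sketch (the likelihood numbers are invented for illustration):

```python
def ignorance_prior(n_outcomes):
    """With no evidence favoring any outcome, assign each an equal share."""
    return [1.0 / n_outcomes] * n_outcomes

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update of P(hypothesis) after seeing a piece of evidence."""
    numer = prior * likelihood_if_true
    denom = numer + (1 - prior) * likelihood_if_false
    return numer / denom

# Heads or tails -- or "is he in the compound or not?" -- starts at 50/50
prior = ignorance_prior(2)[0]  # 0.5

# Hypothetical: the observed evidence is twice as likely if he is there
posterior = bayes_update(prior, likelihood_if_true=0.8, likelihood_if_false=0.4)
# posterior = 0.4 / 0.6, i.e. roughly 0.67
```

The point of the prior is not that 50/50 is the "right" answer; it is the honest starting point when the available signals are judged unreliable.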
Bowden’s account reminded me of an offhand remark that Amos Tversky made some thirty years ago, when we served on that National Research Council committee charged with preventing nuclear war. In dealing with probabilities, he said, most people only have three settings: “gonna happen,” “not gonna happen,” and “maybe.” Amos had an impish sense of humor. He also appreciated the absurdity of an academic committee on a mission to save the world. So I am 98% sure he was joking. And 99% sure his joke captures a basic truth about human judgment.
Confidence and accuracy are positively correlated. But research shows we exaggerate the size of the correlation.
This sort of primal thinking goes a long way to explaining why so many people have a poor grasp of probability. Some of it can be chalked up to simple ignorance and misunderstanding—like people who think that “a 70% chance of rain in Los Angeles” means “it will rain 70% of the day but not the other 30%” or “it will rain in 70% of Los Angeles but not the other 30%” or “70% of forecasters think it will rain but 30% don’t.”
One of twentieth-century science’s great accomplishments has been to show that uncertainty is an ineradicable element of reality.
All scientific knowledge is tentative.
Epistemic uncertainty is something you don’t know but is, at least in theory, knowable. If you wanted to predict the workings of a mystery machine, skilled engineers could, in theory, pry it open and figure it out.
Aleatory uncertainty is something you not only don’t know; it is unknowable. No matter how much you want to know whether it will rain in Philadelphia one year from now, no matter how many great meteorologists you consult, you can’t outguess the seasonal averages.
Science doesn’t tackle “why” questions about the purpose of life. It sticks to “how” questions that focus on causation and probabilities. Snow building up on the side of a mountain may slip and start an avalanche, or it may not. Until it happens, or it doesn’t, it could go either way. It is not predetermined by God or fate or anything else. It is not “meant to be.” It has no meaning. “Maybe” suggests that, contra Einstein, God does play dice with the cosmos. Thus, probabilistic thinking and divine-order thinking are in tension.
This suggests that superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast.
We learn new skills by doing. We improve those skills by doing more. These fundamental facts are true of even the most demanding skills. Modern fighter jets are enormously complex flying computers but classroom instruction isn’t enough to produce a qualified pilot. Not even time in advanced flight simulators will do. Pilots need hours in the air, the more the better. The same is true of surgeons, bankers, and business executives.
The knowledge required to ride a bicycle can’t be fully captured in words and conveyed to others. We need “tacit knowledge,” the sort we only get from bruising experience. To learn to ride a bicycle, we must try to ride one. It goes badly at first.
Effective practice also needs to be accompanied by clear and timely feedback. My research collaborator Don Moore points out that police officers spend a lot of time figuring out who is telling the truth and who is lying, but research has found they aren’t nearly as good at it as they think they are and they tend not to get better with experience. That’s because experience isn’t enough. It must be accompanied by clear feedback.
Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress.
Computer programmers have a wonderful term for a program that is not intended to be released in a final version but will instead be used, analyzed, and improved without end. It is “perpetual beta.”
In philosophic outlook, they tend to be:
CAUTIOUS: Nothing is certain
HUMBLE: Reality is infinitely complex
NONDETERMINISTIC: What happens is not meant to be and does not have to happen
In their abilities and thinking styles, they tend to be:
ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected
INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges
REFLECTIVE: Introspective and self-critical
NUMERATE: Comfortable with numbers
In their methods of forecasting they tend to be:
PRAGMATIC: Not wedded to any idea or agenda
ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views
DRAGONFLY-EYED: Value diverse views and synthesize them into their own
PROBABILISTIC: Judge using many grades of maybe
THOUGHTFUL UPDATERS: When facts change, they change their minds
GOOD...
In their work ethic, they tend to have:
A GROWTH MINDSET: Believe it’s possible to get better
GRIT: Determined to...
Teams can cause terrible mistakes. They can also sharpen judgment and accomplish together what cannot be done alone.
Managers tend to focus on the negative or the positive but they need to see both.
Ask people to list the qualities an effective leader must have, or consult the cottage industry devoted to leadership coaching, or examine rigorous research on the subject, and you will find near-universal agreement on three basic points. Confidence will be on everyone’s list. Leaders must be reasonably confident, and instill confidence in those they lead, because nothing can be accomplished without the belief that it can be. Decisiveness is another essential attribute. Leaders can’t ruminate endlessly. They need to size up the situation, make a decision, and move on. And leaders must deliver
The fundamental message: think. If necessary, discuss your orders. Even criticize them. And if you absolutely must—and you better have a good reason—disobey them.
Auftragstaktik. Usually translated today as “mission command,” the basic idea is simple. “War cannot be conducted from the green table,” Moltke wrote, using an expression that referred to top commanders at headquarters. “Frequent and rapid decisions can be shaped only on the spot according to estimates of local conditions.”
Auftragstaktik blended strategic coherence and decentralized decision making with a simple principle: commanders were to tell subordinates what their goal is but not how to achieve it.
They score higher than average on measures of intelligence and open-mindedness, although they are not off the charts. What makes them so good is less what they are than what they do—the hard work of research, the careful thought and self-criticism, the gathering and synthesizing of other perspectives, the granular judgments and relentless updating.