Superforecasting Quotes

Superforecasting: The Art and Science of Prediction by Philip E. Tetlock
21,865 ratings, 4.08 average rating, 1,675 reviews
Superforecasting Quotes Showing 151-180 of 233
“That is normal human behavior. We tend to go with strong hunches. System 1 follows a primitive psycho-logic: if it feels true, it is. In the Paleolithic world in which our brains evolved, that’s not a bad way of making decisions.”
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“What people didn’t grasp is that the only alternative to a controlled experiment that delivers real insight is an uncontrolled experiment that produces merely the illusion of insight.”
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“Physicians and the institutions they controlled didn’t want to let go of the idea that their judgment alone revealed the truth, so they kept doing what they did because they had always done it that way—and they were backed up by respected authority. They didn’t need scientific validation. They just knew. Cochrane despised this attitude. He called it “the God complex.””
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“The idea of randomized controlled trials was painfully slow to catch on and it was only after World War II that the first serious trials were attempted.”
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“The rate of the development of science is not the rate at which you make observations alone but, much more important, the rate at which you create new things to test.11”
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“Not until the twentieth century did the idea of randomized trial experiments, careful measurement, and statistical power take hold. “Is the application of the numerical method to the subject-matter of medicine a trivial and time-wasting ingenuity as some hold, or is it an important stage in the development of our art, as others proclaim it,” the Lancet asked in 1921.”
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“All who drink of this treatment recover in a short time, except those whom it does not help, who all die,” he wrote. “It is obvious, therefore, that it fails only in incurable cases.”5”
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“Galen’s writings were the indisputable source of medical authority for more than a thousand years. “It is I, and I alone, who has revealed the true path of medicine,” Galen wrote with his usual modesty. And yet Galen never conducted anything resembling a modern experiment.”
Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
“A defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. It must treat the available evidence as reliable and sufficient. These tacit assumptions are so vital to System 1 that Kahneman gave them an ungainly but oddly memorable label: WYSIATI (What You See Is All There Is).14”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“You see the shadow. Snap! You are frightened—and running. That’s the “availability heuristic,” one of many System 1 operations—or heuristics—discovered by Daniel Kahneman, his collaborator Amos Tversky, and other researchers in the fast-growing science of judgment and choice.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Snap judgments are sometimes essential. As Daniel Kahneman puts it, “System 1 is designed to jump to conclusions from little evidence.”13”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Machines may get better at “mimicking human meaning,” and thereby better at predicting human behavior, but “there’s a difference between mimicking and reflecting meaning and originating meaning,” Ferrucci said. That’s a space human judgment will always occupy.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“They are all men (always men) of strong conviction and profound trust in their own judgement.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“The point is now indisputable: when you have a well-validated statistical algorithm, use it.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“in most cases statistical algorithms beat subjective judgment, and in the handful of studies where they don’t, they usually tie.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Superforecasting does require minimum levels of intelligence, numeracy, and knowledge of the world, but anyone who reads serious books about psychological research probably has those prerequisites.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“In the simplest version of the problem, there are two urns. Inside the first urn are 50 white marbles and 50 black marbles. Inside the second urn is a mix of white and black marbles in an unknown proportion. There may be 99 white marbles and 1 black marble, 98 white marbles and 2 black marbles, and so on, all the way to a possible mix of 1 white marble and 99 black marbles. Now, you get to draw a marble from one of the urns. If you draw a black marble, you win cash. So which urn do you choose? It doesn’t take a lot of thought to figure out that the odds of drawing a black marble are the same from either urn but, as Ellsberg showed, people strongly prefer the first urn. What makes the difference is uncertainty. With both urns, it is uncertain whether you will draw a black or white marble, but with the first urn, unlike the second, there is no uncertainty about the contents, which is enough to make it by far the preferred choice. Our aversion to uncertainty can even make people prefer the certainty of a bad thing to the mere possibility of one. Researchers have shown, for example, that people given a colostomy they knew was permanent were happier six months later than those who had a colostomy that may or may not be permanent.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
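The urn arithmetic behind the quote can be checked directly. A minimal sketch, assuming (as one reading of Ellsberg's setup) that every mix from 1 to 99 black marbles in the second urn is equally likely:

```python
from fractions import Fraction

# Urn 1: 50 black marbles out of 100 -> P(black) = 1/2.
p_urn1 = Fraction(50, 100)

# Urn 2: composition unknown; average the chance of black over all
# equally likely compositions (1 to 99 black marbles out of 100).
p_urn2 = sum(Fraction(b, 100) for b in range(1, 100)) / 99

# As the quote says, the odds are identical from either urn.
assert p_urn1 == p_urn2 == Fraction(1, 2)
```

Exact rational arithmetic via `fractions` avoids any floating-point doubt: the uncertainty people dislike is about the urn's contents, not the probability, which averages out to exactly one half.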
“Imagine you suffer from insomnia and haven’t slept properly in days and you lose your temper and shout at a colleague. Then you apologize. What does this incident say about you? It says you need your sleep. Beyond that, it says nothing. But imagine you see someone who snaps, shouts, then apologizes and explains that he has insomnia and hasn’t slept properly in days. What does that incident say about that person? Logically, it should say about him what it said about you, but decades of research suggest that’s not the lesson you will draw. You will think this person is a jerk. Psychologists call this the fundamental attribution error. We are fully aware that situational factors—like insomnia—can influence our own behavior, and we rightly attribute our behavior to those factors, but we routinely don’t make the same allowance for others and instead assume that their behavior reflects who they are. Why did that guy act like a jerk? Because he is a jerk. This is a potent bias. If a student is told to speak in support of a Republican candidate, an observer will tend to see the student as pro-Republican even if the student only did what she was told to do—and even if the observer is the one who gave the order! Stepping outside ourselves and seeing things as others do is that hard.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Laws of physics aside, there are no universal constants, so separating the predictable from the unpredictable is difficult work. There’s no way around it.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Unpredictability and predictability coexist uneasily in the intricately interlocking systems that make up our bodies, our societies, and the cosmos. How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“This is a big reason for the “skeptic” half of my “optimistic skeptic” stance. We live in a world where the actions of one nearly powerless man can have ripple effects around the world—ripples that affect us all to varying degrees.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Call me an “optimistic skeptic.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“There’s also the “premortem,” in which the team is told to assume a course of action has failed and to explain why—which makes team members feel safe to express doubts they may have about the leader’s plan. But the superteams did not start with leaders and norms, which created other challenges.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“When people gather and discuss in a group, independence of thought and expression can be lost. Maybe one person is a loudmouth who dominates the discussion, or a bully, or a superficially impressive talker, or someone with credentials that cow others into line. In so many ways, a group can get people to abandon independent judgment and buy into errors. When that happens, the mistakes will pile up, not cancel out. This is the root of collective folly, whether it’s Dutch investors in the seventeenth century, who became collectively convinced that a tulip bulb was worth more than a laborer’s annual salary, or American home buyers in 2005, talking themselves into believing that real estate prices could only go up. But loss of independence isn’t inevitable in a group, as JFK’s team showed during the Cuban missile crisis. If forecasters can keep questioning themselves and their teammates, and welcome vigorous debate, the group can become more than the sum of its parts.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“The Renzettis live in a small house at 84 Chestnut Avenue. Frank Renzetti is forty-four and works as a bookkeeper for a moving company. Mary Renzetti is thirty-five and works part-time at a day care. They have one child, Tommy, who is five. Frank’s widowed mother, Camila, also lives with the family. My question: How likely is it that the Renzettis have a pet? To answer that, most people would zero in on the family’s details. “Renzetti is an Italian name,” someone might think. “So are ‘Frank’ and ‘Camila.’ That may mean Frank grew up with lots of brothers and sisters, but he’s only got one child. He probably wants to have a big family but he can’t afford it. So it would make sense that he compensated a little by getting a pet.” Someone else might think, “People get pets for kids and the Renzettis only have one child, and Tommy isn’t old enough to take care of a pet. So it seems unlikely.” This sort of storytelling can be very compelling, particularly when the available details are much richer than what I’ve provided here. But superforecasters wouldn’t bother with any of that, at least not at first. The first thing they would do is find out what percentage of American households own a pet. Statisticians call that the base rate—how common something is within a broader class. Daniel Kahneman has a much more evocative visual term for it. He calls it the “outside view”—in contrast to the “inside view,” which is the specifics of the particular case. A few minutes with Google tells me about 62% of American households own pets. That’s the outside view here. Starting with the outside view means I will start by estimating that there is a 62% chance the Renzettis have a pet. Then I will turn to the inside view—all those details about the Renzettis—and use them to adjust that initial 62% up or down. It’s natural to be drawn to the inside view. It’s usually concrete and filled with engaging detail we can use to craft a story about what’s going on. 
The outside view is typically abstract, bare, and doesn’t lend itself so readily to storytelling. So even smart, accomplished people routinely fail to consider the outside view. The Wall Street Journal columnist and former Reagan speechwriter Peggy Noonan once predicted trouble for the Democrats because polls had found that George W. Bush’s approval rating, which had been rock-bottom at the end of his term, had rebounded to 47% four years after leaving office, equal to President Obama’s. Noonan found that astonishing—and deeply meaningful.9 But if she had considered the outside view she would have discovered that presidential approval always rises after a president leaves office. Even Richard Nixon’s number went up. So Bush’s improved standing wasn’t surprising in the least—which strongly suggests the meaning she drew from it was illusory. Superforecasters don’t make that mistake. If Bill Flack were asked whether, in the next twelve months, there would be an armed clash between China and Vietnam over some border dispute, he wouldn’t immediately delve into the particulars of that border dispute and the current state of China-Vietnam relations. He would instead look at how often there have been armed clashes in the past. “Say we get hostile conduct between China and Vietnam every five years,” Bill says. “I’ll use a five-year recurrence model to predict the future.” In any given year, then, the outside view would suggest to Bill there is a 20% chance of a clash. Having established that, Bill would look at the situation today and adjust that number up or down.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
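The outside-view procedure the quote describes — anchor on the base rate, then nudge the estimate with inside-view specifics — can be sketched in a few lines. The 62% base rate is the figure from the quote; the individual adjustment values below are purely illustrative assumptions, not from the book:

```python
def adjust_from_base_rate(base_rate, adjustments):
    # Start from the outside view (the base rate), then apply
    # inside-view adjustments one by one, clamping to [0, 1].
    p = base_rate
    for delta in adjustments:
        p = min(1.0, max(0.0, p + delta))
    return p

# Outside view: ~62% of American households own a pet.
# Inside view (hypothetical nudges): only one young child (-0.05),
# a grandmother at home with time to care for a pet (+0.03).
estimate = adjust_from_base_rate(0.62, [-0.05, +0.03])
print(round(estimate, 2))  # 0.6
```

The point of the structure is the ordering: the concrete story-friendly details only ever move an estimate that the abstract base rate has already anchored.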
“Consider a guess-the-number game in which players must guess a number between 0 and 100. The person whose guess comes closest to two-thirds of the average guess of all contestants wins. That’s it. And imagine there is a prize: the reader who comes closest to the correct answer wins a pair of business-class tickets for a flight between London and New York. The Financial Times actually held this contest in 1997, at the urging of Richard Thaler, a pioneer of behavioral economics. If I were reading the Financial Times in 1997, how would I win those tickets? I might start by thinking that because anyone can guess anything between 0 and 100 the guesses will be scattered randomly. That would make the average guess 50. And two-thirds of 50 is 33. So I should guess 33. At this point, I’m feeling pretty pleased with myself. I’m sure I’ve nailed it. But before I say “final answer,” I pause, think about the other contestants, and it dawns on me that they went through the same thought process as I did. Which means they all guessed 33 too. Which means the average guess is not 50. It’s 33. And two-thirds of 33 is 22. So my first conclusion was actually wrong. I should guess 22. Now I’m feeling very clever indeed. But wait! The other contestants also thought about the other contestants, just as I did. Which means they would have all guessed 22. Which means the average guess is actually 22. And two-thirds of 22 is about 15. So I should … See where this is going? Because the contestants are aware of each other, and aware that they are aware, the number is going to keep shrinking until it hits the point where it can no longer shrink. That point is 0. So that’s my final answer. And I will surely win. My logic is airtight. And I happen to be one of those highly educated people who is familiar with game theory, so I know 0 is called the Nash equilibrium solution. QED. The only question is who will come with me to London. Guess what? I’m wrong. 
In the actual contest, some people did guess 0, but not many, and 0 was not the right answer. It wasn’t even close to right. The average guess of all the contestants was 18.91, so the winning guess was 13. How did I get this so wrong? It wasn’t my logic, which was sound. I failed because I only looked at the problem from one perspective—the perspective of logic. Who are the other contestants? Are they all the sort of people who would think about this carefully, spot the logic, and pursue it relentlessly to the final answer of 0?”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
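The chain of reasoning in the quote (50 → 33 → 22 → 15 → … → 0) is just repeated multiplication by two-thirds, which a short sketch makes explicit. The 18.91 average and winning guess of 13 are figures from the quote:

```python
def iterate_two_thirds(start=50.0, rounds=10):
    # Trace the "they know that I know..." chain from the quote:
    # each round of reasoning multiplies the expected average by 2/3,
    # shrinking the guess toward the Nash equilibrium of 0.
    guess = start
    trace = [guess]
    for _ in range(rounds):
        guess *= 2 / 3
        trace.append(guess)
    return trace

steps = iterate_two_thirds()
print([round(g) for g in steps[:4]])  # [50, 33, 22, 15]

# In the actual 1997 FT contest the average guess was 18.91,
# so the winning guess was two-thirds of that, rounded:
print(round(18.91 * 2 / 3))  # 13
```

The pure-logic answer of 0 fails because real contestants stop iterating at different depths, so the winning move is a judgment about other people, not just the fixed point of the recursion.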
“the more famous an expert was, the less accurate he was. That’s not because editors, producers, and the public go looking for bad forecasters. They go looking for hedgehogs, who just happen to be bad forecasters. Animated by a Big Idea, hedgehogs tell tight, simple, clear stories that grab and hold audiences. As anyone who has done media training knows, the first rule is “keep it simple, stupid.” Better still, hedgehogs are confident. With their one-perspective analysis, hedgehogs can pile up reasons why they are right—“furthermore,” “moreover”—without considering other perspectives and the pesky doubts and caveats they raise. And so, as EPJ showed, hedgehogs are likelier to say something definitely will or won’t happen. For many audiences, that’s satisfying. People tend to find uncertainty disturbing and “maybe” underscores uncertainty with a bright red crayon. The simplicity and confidence of the hedgehog impairs foresight, but it calms nerves—which is good for the careers of hedgehogs. Foxes don’t fare so well in the media. They’re less confident, less likely to say something is “certain” or “impossible,” and are likelier to settle on shades of “maybe.” And their stories are complex, full of “howevers” and “on the other hands,” because they look at problems one way, then another, and another. This aggregation of many perspectives is bad TV. But it’s good forecasting. Indeed, it’s essential.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Larry Kudlow hosted a business talk show on CNBC and is a widely published pundit, but he got his start as an economist in the Reagan administration and later worked with Art Laffer, the economist whose theories were the cornerstone of Ronald Reagan’s economic policies. Kudlow’s one Big Idea is supply-side economics. When President George W. Bush followed the supply-side prescription by enacting substantial tax cuts, Kudlow was certain an economic boom of equal magnitude would follow. He dubbed it “the Bush boom.” Reality fell short: growth and job creation were positive but somewhat disappointing relative to the long-term average and particularly in comparison to that of the Clinton era, which began with a substantial tax hike. But Kudlow stuck to his guns and insisted, year after year, that the “Bush boom” was happening as forecast, even if commentators hadn’t noticed. He called it “the biggest story never told.” In December 2007, months after the first rumblings of the financial crisis had been felt, the economy looked shaky, and many observers worried a recession was coming, or had even arrived, Kudlow was optimistic. “There is no recession,” he wrote. “In fact, we are about to enter the seventh consecutive year of the Bush boom.”19 The National Bureau of Economic Research later designated December 2007 as the official start of the Great Recession of 2007–9. As the months passed, the economy weakened and worries grew, but Kudlow did not budge. There is no recession and there will be no recession, he insisted. When the White House said the same in April 2008, Kudlow wrote, “President George W. Bush may turn out to be the top economic forecaster in the country.”20 Through the spring and into summer, the economy worsened but Kudlow denied it. 
“We are in a mental recession, not an actual recession,”21 he wrote, a theme he kept repeating until September 15, when Lehman Brothers filed for bankruptcy, Wall Street was thrown into chaos, the global financial system froze, and people the world over felt like passengers in a plunging jet, eyes wide, fingers digging into armrests. How could Kudlow be so consistently wrong? Like all of us, hedgehog forecasters first see things from the tip-of-your-nose perspective. That’s natural enough. But the hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next. Think of that Big Idea like a pair of glasses that the hedgehog never takes off. The hedgehog sees everything through those glasses. And they aren’t ordinary glasses. They’re green-tinted glasses—like the glasses that visitors to the Emerald City were required to wear in L. Frank Baum’s The Wonderful Wizard of Oz. Now, wearing green-tinted glasses may sometimes be helpful, in that they accentuate something real that might otherwise be overlooked. Maybe there is just a trace of green in a tablecloth that a naked eye might miss, or a subtle shade of green in running water. But far more often, green-tinted glasses distort reality. Everywhere you look, you see green, whether it’s there or not. And very often, it’s not. The Emerald City wasn’t even emerald in the fable. People only thought it was because they were forced to wear green-tinted glasses! So the hedgehog’s one Big Idea doesn’t improve his foresight. It distorts it. And more information doesn’t help because it’s all seen through the same tinted glasses. It may increase the hedgehog’s confidence, but not his accuracy. That’s a bad combination.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Consider a forecast Steve Ballmer made in 2007, when he was CEO of Microsoft: “There’s no chance that the iPhone is going to get any significant market share. No chance.” Ballmer’s forecast is infamous. Google “Ballmer” and “worst tech predictions”—or “Bing” it, as Ballmer would prefer—and you will see it enshrined in the forecasting hall of shame, along with such classics as the president of Digital Equipment Corporation declaring in 1977 that “there is no reason anyone would want a computer in their home.” And that seems fitting because Ballmer’s forecast looks spectacularly wrong. As the author of “The Ten Worst Tech Predictions of All Time” noted in 2013, “the iPhone commands 42% of US smartphone market share and 13.1% worldwide.”1 That’s pretty “significant.” As another journalist wrote, when Ballmer announced his departure from Microsoft in 2013, “The iPhone alone now generates more revenue than all of Microsoft.”2”
Philip Tetlock, Superforecasting: The Art and Science of Prediction
“Often, I cannot explain a certain move, only know that it feels right, and it seems that my intuition is right more often than not,” observed the Norwegian prodigy Magnus Carlsen, the world chess champion and the highest-ranked player in history. “If I study a position for an hour then I am usually going in loops and I’m probably not going to come up with something useful. I usually know what I am going to do after 10 seconds; the rest is double-checking.”23 Carlsen respects his intuition, as well he should, but he also does a lot of “double-checking” because he knows that sometimes intuition can let him down and conscious thought can improve his judgment.”
Philip Tetlock, Superforecasting: The Art and Science of Prediction