The Signal and the Noise: Why So Many Predictions Fail – but Some Don't
There were four major failures of prediction: the housing bubble itself can be thought of as a poor prediction; a failure on the part of the ratings agencies; a failure to anticipate how a housing crisis could trigger a global financial crisis; and a failure to predict the scope of the economic problems that it might create.
There is a technical term for this type of problem: the events these forecasters were considering were out of sample. When there is a major failure of prediction, this problem usually has its fingerprints all over the crime scene.
The housing collapse was an out-of-sample event, and their models were worthless for evaluating default risk under those conditions.
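As a rough illustration of what "out of sample" means here, consider a toy sketch (my own, with invented numbers, not the rating agencies' actual models): a default-risk estimate fit only on years of rising home prices has nothing meaningful to say about a year in which prices fall sharply.

```python
# Toy sketch of an out-of-sample problem (all numbers invented):
# a default-risk "model" fit only on years when home prices rose has never
# seen a nationwide price decline, so its estimate is meaningless in that regime.
in_sample = [  # (annual home price change %, mortgage default rate %) -- rising prices only
    (3.0, 1.4), (4.0, 1.3), (5.0, 1.2), (6.0, 1.1), (7.0, 1.0),
]

# Ordinary least-squares fit of default rate on price change.
n = len(in_sample)
mean_x = sum(x for x, _ in in_sample) / n
mean_y = sum(y for _, y in in_sample) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in in_sample)
         / sum((x - mean_x) ** 2 for x, _ in in_sample))
intercept = mean_y - slope * mean_x

# Extrapolating to a 15% price decline -- far outside anything in the sample.
prediction = intercept + slope * (-15.0)
print(f"extrapolated default rate at -15% prices: {prediction:.1f}% "
      f"(in-sample defaults ranged from 1.0% to 1.4%)")
```

The point is not the specific number the extrapolation returns, but that nothing in the data constrains the answer once the input leaves the range the model was fit on.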
Forecasters often resist considering these out-of-sample problems.
One of the pervasive risks that we face in the information age, as I wrote in the introduction, is that even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening.
The panel may as well have been flipping coins. I determined 338 of their predictions to be either mostly or completely false. The exact same number—338—were either mostly or completely true.
Each panelist received almost identical scores, ranging from 49 percent to 52 percent, meaning that they were about as likely to get a prediction right as wrong.
Political experts had difficulty anticipating the USSR’s collapse, Tetlock found, because a prediction that not only forecast the regime’s demise but also understood the reasons for it required different strands of argument to be woven together.
Tetlock’s conclusion was damning. The experts in his survey—regardless of their occupation, experience, or subfield—had done barely any better than random chance, and they had done worse than even rudimentary statistical methods at predicting future political events.
About 15 percent of events that they claimed had no chance of occurring in fact happened, while about 25 percent of those that they said were absolutely sure things in fact failed to occur.
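One way to picture the calibration exercise behind those figures is a small sketch like the following (my own illustration with made-up forecasts, not Tetlock's data): group predictions by the probability the expert assigned, then compare that probability with how often the event actually occurred.

```python
# Hypothetical calibration check: do events an expert calls "impossible" or
# "certain" actually happen 0% and 100% of the time?
from collections import defaultdict

# (assigned probability, event occurred?) -- invented example data
forecasts = [
    (0.0, True), (0.0, False), (0.0, False), (0.0, False), (0.0, False), (0.0, True),
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (1.0, True), (1.0, True), (1.0, False), (1.0, True),
]

buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"claimed {prob:.0%} -> happened {observed:.0%} of the time (n={len(outcomes)})")
```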
The more interviews that an expert had done with the press, Tetlock found, the worse his predictions tended to be.
“The fox knows many little things, but the hedgehog knows one big thing.”
The basic idea is that writers and thinkers can be divided into two broad categories:
Hedgehogs are type A personalities who believe in Big Ideas—in governing principles about the world that behave as though they were physical laws and undergird virtually every interaction in society.
Foxes, on the other hand, are scrappy creatures who believe in a plethora of little ideas and in taking a multitude of approaches toward a problem. They tend to be more tolerant of nuance, uncertainty, complexity, and dissenting opinion.
Foxes, Tetlock found, are considerably better at forecasting than hedgehogs.
Big, bold, hedgehog-like predictions, in other words, are more likely to get you on television.
In 2011, he said that Donald Trump would run for the Republican nomination—and had a “damn good” chance of winning it. All those predictions turned out to be horribly wrong.
Foxes sometimes have more trouble fitting into type A cultures like television, business, and politics. Their belief that many problems are hard to forecast—and that we should be explicit about accounting for these uncertainties—may be mistaken for a lack of self-confidence.
Fox-like attitudes may be especially important when it comes to making predictions about politics. There are some particular traps that can make suckers of hedgehogs in the arena of political prediction and which foxes are more careful to avoid.
One of these is simply partisan ideology.
Tetlock believes the more facts hedgehogs have at their command, the more opportunities they have to permute and manipulate them in ways that confirm their biases.
You can get lost in the narrative. Politics may be especially susceptible to poor predictions precisely because of its human elements: a good election engages our dramatic sensibilities. This does not mean that you must feel totally dispassionate about a political event in order to make a good prediction about it. But it does mean that a fox’s aloof attitude can pay dividends.
Almost all the forecasts that I publish, in politics and other fields, are probabilistic. Instead of spitting out just one number and claiming to know exactly what will happen, I instead articulate a range of possible outcomes.
The wide distribution of outcomes represented the most honest expression of the uncertainty in the real world.
The FiveThirtyEight models provide much of their value in this way. It’s very easy to look at an election, see that one candidate is ahead in all or most of the polls, and determine that he’s the favorite to win. (With some exceptions, this assumption will be correct.) What becomes much trickier is determining exactly how much of a favorite he is. Our brains, wired to detect patterns, are always looking for a signal, when instead we should appreciate how noisy the data is.
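A toy sketch of that distinction, using invented numbers and not the actual FiveThirtyEight model: treat the polling lead as a noisy estimate of the true margin and translate it into a win probability by simulation, rather than stopping at "ahead in the polls."

```python
# Toy illustration: a candidate leads by 2 points in the polling average, but
# polls are noisy. Simulate plausible true margins and count how often the
# leader actually wins -- the lead alone doesn't say *how much* of a favorite.
import random

poll_lead = 2.0        # leader's margin in the polling average (points) -- invented
polling_error = 4.0    # std. dev. of poll-versus-result error (points) -- invented
simulations = 100_000

wins = sum(random.gauss(poll_lead, polling_error) > 0 for _ in range(simulations))
print(f"Leader wins in roughly {wins / simulations:.0%} of simulations")
```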
Another misconception is that a good prediction shouldn’t change.
The right attitude is that you should make the best forecast possible today—regardless of what you said last week, last month, or last year.
“When the facts change, I change my mind,” the economist John Maynard Keynes famously said. “What do you do, sir?”
Every hedgehog fantasizes that they will make a daring, audacious, outside-the-box prediction—one that differs radically from the consensus view on a subject.
The expert consensus can be wrong.
Even though foxes, myself included, aren’t really a conformist lot, we get worried anytime our forecasts differ radically from those being produced by our competitors.
There were several other models that took a similar approach, claiming they had boiled down something as complex as a presidential election to a two-variable formula.
The failure of these magic-bullet forecasting models came even though they were quantitative, relying on published economic statistics.
To be certain, I have a strong preference for more quantitative approaches in my own forecasts. But hedgehogs can take any type of information and have it reinforce their biases, while foxes who have practice in weighing different types of information together can sometimes benefit from accounting for qualitative along with quantitative factors.
The Cook forecasts have a good track record even when they disagree with quantitative indicators.
The most distinctive feature of Cook's process is its candidate interviews.
Wasserman instead considers everything in the broader political context. A terrific Democratic candidate who aces her interview might not stand a chance in a district that the Republican normally wins by twenty points. So why bother with the candidate interviews at all?
Mostly, Wasserman is looking for red flags.
In this book, I use the terms objective and subjective carefully. The word objective is sometimes taken to be synonymous with quantitative, but it isn't. Instead it means seeing beyond our personal biases and prejudices and toward the truth of a problem.
Wherever there is human judgment there is the potential for bias.