The Signal and the Noise: Why So Many Predictions Fail—but Some Don't
Kindle Notes & Highlights
5%
Human beings have an extraordinary capacity to ignore risks that threaten their livelihood, as though this will make them go away.
9%
The more interviews that an expert had done with the press, Tetlock found, the worse his predictions tended to be.
11%
if you have reason to think that yesterday’s forecast was wrong, there is no glory in sticking to it.
11%
It is the alternative—failing to change our forecast because we risk embarrassment by doing so—that reveals a lack of courage.
12%
Wherever there is human judgment there is the potential for bias.
29%
A forecaster should almost never ignore data, especially when she is studying rare events like recessions or presidential elections, about which there isn’t very much data to begin with. Ignoring data is often a tip-off that the forecaster is overconfident, or is overfitting her model—that she is interested in showing off rather than trying to be accurate.
29%
An economic model conditioned on the notion that nothing major will change is a useless one.
36%
Successful gamblers—and successful forecasters of any kind—do not think of the future in terms of no-lose bets, unimpeachable theories, and infinitely precise measurements. These are the illusions of the sucker, the sirens of his overconfidence.
36%
Finding patterns is easy in any kind of data-rich environment; that’s what mediocre gamblers do. The key is in determining whether the patterns represent noise or signal.
36%
don’t blame nature because you are too daft to understand it:
38%
Essentially, the frequentist approach toward statistics seeks to wash its hands of the reason that predictions most often go wrong: human error.
39%
Making predictions based on our beliefs is the best (and perhaps even the only) way to test ourselves. If objectivity is the concern for a greater truth beyond our personal circumstances, and prediction is the best way to examine how closely aligned our personal perceptions are with that greater truth, the most objective among us are those who make the most accurate predictions.
48%
sometimes the only solution when the data is very noisy—is to focus more on process than on results.
53%
As John Maynard Keynes said, “The market can stay irrational longer than you can stay solvent.”
59%
A forecaster who says he doesn’t care about the science is like the cook who says he doesn’t care about food.
61%
It is much easier after the event to sort the relevant from the irrelevant signals. After the event, of course, a signal is always crystal clear; we can now see what disaster it was signaling, since the disaster has occurred. But before the event it is obscure and pregnant with conflicting meanings. It comes to the observer embedded in an atmosphere of “noise,” i.e., in the company of all sorts of information that is useless and irrelevant for predicting the particular disaster.
61%
There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.
66%
Bayes’s theorem requires us to state—explicitly—how likely we believe an event is to occur before we begin to weigh the evidence. It calls this estimate a prior belief.
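The updating process this highlight describes can be sketched in a few lines of Python. This is a minimal illustration, not from the book: the hypothesis ("a recession is coming"), the prior of 10%, and the likelihood values are all made-up numbers chosen only to show how a stated prior is revised by evidence.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E.

    P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|~H) * P(~H)]
    """
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Illustrative numbers: a 10% prior belief that a recession is coming,
# and a piece of evidence twice as likely if a recession is in fact coming.
posterior = bayes_update(prior=0.10,
                         p_evidence_given_h=0.6,
                         p_evidence_given_not_h=0.3)
print(round(posterior, 3))  # 0.182
```

The point of the exercise is the one the quote makes: the calculation cannot even begin until the prior is stated explicitly, and the same evidence yields a different posterior for a forecaster who started from a different prior.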
66%
What isn’t acceptable under Bayes’s theorem is to pretend that you don’t have any prior beliefs. You should work to reduce your biases, but to say you have none is a sign that you have many. To state your beliefs up front—to say “Here’s where I’m coming from”12—is a way to operate in good faith and to recognize that you perceive reality through a subjective filter.