Kindle Notes & Highlights
by Nate Silver
Read between August 9 and August 25, 2015
Human beings have an extraordinary capacity to ignore risks that threaten their livelihood, as though this will make them go away.
The more interviews that an expert had done with the press, Tetlock found, the worse his predictions tended to be.
If you have reason to think that yesterday’s forecast was wrong, there is no glory in sticking to it.
It is the alternative—failing to change our forecast because we risk embarrassment by doing so—that reveals a lack of courage.
Wherever there is human judgment there is the potential for bias.
A forecaster should almost never ignore data, especially when she is studying rare events like recessions or presidential elections, about which there isn’t very much data to begin with. Ignoring data is often a tip-off that the forecaster is overconfident, or is overfitting her model—that she is interested in showing off rather than trying to be accurate.
An economic model conditioned on the notion that nothing major will change is a useless one.
Successful gamblers—and successful forecasters of any kind—do not think of the future in terms of no-lose bets, unimpeachable theories, and infinitely precise measurements. These are the illusions of the sucker, the sirens of his overconfidence.
Finding patterns is easy in any kind of data-rich environment; that’s what mediocre gamblers do. The key is in determining whether the patterns represent noise or signal.
Don’t blame nature because you are too daft to understand it.
Essentially, the frequentist approach toward statistics seeks to wash its hands of the reason that predictions most often go wrong: human error.
Making predictions based on our beliefs is the best (and perhaps even the only) way to test ourselves. If objectivity is the concern for a greater truth beyond our personal circumstances, and prediction is the best way to examine how closely aligned our personal perceptions are with that greater truth, the most objective among us are those who make the most accurate predictions.
Sometimes the only solution when the data is very noisy is to focus more on process than on results.
As John Maynard Keynes said, “The market can stay irrational longer than you can stay solvent.”
A forecaster who says he doesn’t care about the science is like the cook who says he doesn’t care about food.
It is much easier after the event to sort the relevant from the irrelevant signals. After the event, of course, a signal is always crystal clear; we can now see what disaster it was signaling, since the disaster has occurred. But before the event it is obscure and pregnant with conflicting meanings. It comes to the observer embedded in an atmosphere of “noise,” i.e., in the company of all sorts of information that is useless and irrelevant for predicting the particular disaster.
There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.
Bayes’s theorem requires us to state—explicitly—how likely we believe an event is to occur before we begin to weigh the evidence. It calls this estimate a prior belief.
What isn’t acceptable under Bayes’s theorem is to pretend that you don’t have any prior beliefs. You should work to reduce your biases, but to say you have none is a sign that you have many. To state your beliefs up front—to say “Here’s where I’m coming from”—is a way to operate in good faith and to recognize that you perceive reality through a subjective filter.
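The update rule these two highlights describe can be written out directly. The sketch below is a minimal illustration and is not from the book: the function name bayes_update and every number in it are hypothetical, chosen only to show how an explicitly stated prior is revised once the evidence is weighed.

# Bayes's theorem: P(hypothesis | evidence)
#   = P(evidence | hypothesis) * prior / P(evidence)
# All figures below are illustrative placeholders, not data from the book.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise a stated prior belief in light of new evidence."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# State the prior explicitly ("here's where I'm coming from"),
# then let the evidence move it.
prior = 0.05                       # initial belief that the event will occur
posterior = bayes_update(prior, 0.80, 0.10)
print(round(posterior, 3))         # roughly 0.296

The point of the exercise is the first line, not the last: the prior has to be written down before the evidence is applied, which is exactly what the highlight says Bayes's theorem will not let you skip.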