The Signal and the Noise: Why So Many Predictions Fail-but Some Don't
2%
but with the printing press. Johannes Gutenberg’s invention in 1440 made information available to the masses, and the explosion of ideas it produced had unintended consequences and unpredictable effects.
Joe Soltzberg
Like the internet, its real value was that it accelerated the proliferation of ideas. What other ways can this be done that we haven't thought of yet?
2%
The Industrial Revolution largely began in Protestant countries and largely in those with a free press, where both religious and scientific ideas could flow without fear of censorship.
3%
If there is one thing that defines Americans—one thing that makes us exceptional—it is our belief in Cassius’s idea that we are in control of our own fates.
3%
A long-term study by Philip E. Tetlock of the University of Pennsylvania found that when political scientists claimed that a political outcome had absolutely no chance of occurring, it nevertheless happened about 15 percent of the time. (The political scientists are probably better than television pundits, however.)
3%
The information overload after the birth of the printing press produced greater sectarianism.
Joe Soltzberg
Not convinced about this or any causal tie
4%
The most calamitous failures of prediction usually have a lot in common. We focus on those signals that tell a story about the world as we would like it to be, not how it really is. We ignore the risks that are hardest to measure, even when they pose the greatest threats to our well-being. We make approximations and assumptions about the world that are much cruder than we realize. We abhor uncertainty, even when it is an irreducible part of the problem we are trying to solve.
5%
Instances of the two-word phrase “housing bubble” had appeared in just eight news accounts in 2001 but jumped to 3,447 references by 2005. The housing bubble was discussed about ten times per day in reputable newspapers and periodicals.
Joe Soltzberg
So we could have known...
5%
One reason that S&P and Moody’s enjoyed such a dominant market presence is simply that they had been a part of the club for a long time. They are part of a legal oligopoly; entry into the industry is limited by the government. Meanwhile, a seal of approval from S&P and Moody’s is often mandated by the bylaws of large pension funds, about two-thirds of which mention S&P, Moody’s, or both by name, requiring that they rate a piece of debt before the pension fund can purchase it.
5%
Moody’s revenue from so-called structured-finance ratings increased by more than 800 percent between 1997 and 2007 and came to represent the majority of their ratings business during the bubble years. These products helped Moody’s to the highest profit margin of any company in the S&P 500 for five consecutive years during the housing bubble. (In 2010, even after the bubble burst and the problems with the ratings agencies had become obvious, Moody’s still made a 25 percent profit.)
5%
It might have been fine had the potential for error in their forecasts been linear and arithmetic. But leverage, or investments financed by debt, can make the error in a forecast compound many times over, and introduces the potential of highly geometric and nonlinear mistakes.
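A toy calculation makes the point concrete (the numbers here are invented for illustration, not taken from the book): the same small forecasting error that barely dents an unlevered position can wipe out most of the equity behind a highly levered one.

```python
# Hypothetical illustration of how leverage magnifies a forecasting error.
# The -2% shock and 30x leverage ratio are assumed values, not Silver's.

def equity_change(asset_return: float, leverage: float) -> float:
    """Percentage change in equity when positions are financed at the
    given leverage ratio (debt-funded exposure multiplies the return)."""
    return asset_return * leverage

# A forecast that is off by 2 percentage points is survivable unlevered...
unlevered = equity_change(-0.02, 1)    # -2% hit to equity
# ...but at 30x leverage the same error consumes most of the equity cushion.
levered = equity_change(-0.02, 30)     # -60% hit to equity

print(f"unlevered: {unlevered:.0%}, levered 30x: {levered:.0%}")
```

The error itself is still linear in the shock; leverage simply rescales it so far that a "small" miss becomes ruinous.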
6%
Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.
7%
“We had too much greed and too little fear,” Summers told me in 2009. “Now we have too much fear and too little greed.”
Joe Soltzberg
These would be cool indices. But how do you even go about quantifying this?
8%
The problem, of course, is that of those 20,000 car trips, none occurred when you were anywhere near this drunk. Your sample size for drunk driving is not 20,000 trips but zero, and you have no way to use your past experience to forecast your accident risk. This is an example of an out-of-sample problem.
10%
In 2011, he said that Donald Trump would run for the Republican nomination—and had a “damn good” chance of winning it. All those predictions turned out to be horribly wrong.
Joe Soltzberg
Well well....
10%
One of Tetlock’s more remarkable findings is that, while foxes tend to get better at forecasting with experience, the opposite is true of hedgehogs: their performance tends to worsen as they pick up additional credentials.
10%
Tetlock believes the more facts hedgehogs have at their command, the more opportunities they have to permute and manipulate them in ways that confirm their biases.
10%
You can apply Tetlock’s test to diagnose whether you are a hedgehog: Do your predictions improve when you have access to more information?
Joe Soltzberg
Cool litmus test
11%
We have trouble distinguishing a 90 percent chance that the plane will land safely from a 99 percent chance or a 99.9999 percent chance, even though these imply vastly different things about whether we ought to book our ticket.
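Back-of-the-envelope arithmetic shows how different those probabilities really are (the 10,000-flight horizon is an arbitrary choice for illustration):

```python
# Expected failed landings implied by each "safe landing" probability,
# assuming independent flights. Flight count is an assumed round number.

def expected_failures(p_safe: float, flights: int = 10_000) -> float:
    """Expected number of failures over `flights` independent flights,
    each landing safely with probability p_safe."""
    return (1 - p_safe) * flights

for p in (0.90, 0.99, 0.999999):
    print(f"{p} safe -> ~{expected_failures(p):g} crashes per 10,000 flights")
```

The jump from 90 to 99.9999 percent is the difference between roughly a thousand crashes and a one-in-a-hundred chance of a single one.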
11%
“When the facts change, I change my mind,” the economist John Maynard Keynes famously said. “What do you do, sir?”
21%
The bigger question is why, if these longer-term forecasts aren’t any good, outlets like the Weather Channel (which publishes ten-day forecasts) and AccuWeather (which ups the ante and goes for fifteen) continue to produce them.
21%
forecasters rarely predict exactly a 50 percent chance of rain, which might seem wishy-washy and indecisive to consumers. Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.
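The cost of that rounding can be made precise with the Brier score, a standard accuracy measure for probability forecasts (the worked numbers below are my own illustration, not from the book):

```python
# Expected Brier score (lower is better) for a forecast `forecast` of an
# event whose true probability is p_true: the score is (forecast - outcome)^2,
# averaged over the two possible outcomes.

def expected_brier(p_true: float, forecast: float) -> float:
    """Expected squared error of a probability forecast."""
    return p_true * (1 - forecast) ** 2 + (1 - p_true) * forecast ** 2

# If the true chance of rain is exactly 50%, the honest forecast is optimal...
honest = expected_brier(0.5, 0.5)    # 0.25
# ...and rounding it up to 60% is strictly worse.
rounded = expected_brier(0.5, 0.6)
```

Rounding a true 50 percent to 60 adds a penalty of 0.01 to the expected score, which is exactly the sense in which the forecast becomes less accurate.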
Joe Soltzberg
wow
21%
Most commercial weather forecasts are biased, and probably deliberately so. In particular, they are biased toward forecasting more precipitation than will actually occur—what meteorologists call a “wet bias.”
21%
The National Weather Service’s forecasts are, it turns out, admirably well calibrated (figure 4-7). When they say there is a 20 percent chance of rain, it really does rain 20 percent of the time. They have been making good use of feedback, and their forecasts are honest and accurate.
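Calibration of this kind is straightforward to check in code. The sketch below (synthetic data, not actual NWS records) bins probability forecasts and compares each bin's stated chance of rain with the observed frequency:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Group probability forecasts into 10%-wide bins and report the
    observed frequency of the event in each bin (1 = rain, 0 = no rain).
    A well-calibrated forecaster has observed frequencies near bin values."""
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[round(f, 1)].append(o)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# Perfectly calibrated toy data: of five "20% chance of rain" days,
# it rained on exactly one.
print(calibration_table([0.2, 0.2, 0.2, 0.2, 0.2], [1, 0, 0, 0, 0]))
```

Figure 4-7 in the book is essentially this table plotted: stated probability on one axis, observed frequency on the other, with the NWS hugging the diagonal.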
21%
People notice one type of mistake—the failure to predict rain—more than another kind, false alarms. If it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic, whereas an unexpectedly sunny day is taken as a serendipitous bonus. It isn’t good science, but as Dr. Rose at the Weather Channel acknowledged to me: “If the forecast was objective, if it has zero bias in precipitation, we’d probably be in trouble.”
21%
People respond to what they hear from local officials.
Joe Soltzberg
Relevant in 2020
22%
One lesson from Katrina, however, is that accuracy is the best policy for a forecaster.
23%
earthquakes cannot be predicted.
32%
Extrapolation tends to cause its greatest problems in fields—including population growth and disease—where the quantity that you want to study is growing exponentially.
43%
mistake? His anxiety over Deep Blue’s forty-fourth move in the first game—the move in which the computer had moved its rook for no apparent purpose. Kasparov had concluded that the counterintuitive play must be a sign of superior intelligence. He had never considered that it was simply a bug. For as much as we
49%
Why so much trading occurs is one of the greatest mysteries in finance. More and more people seem to think they can outpredict the collective wisdom of the market. Are these traders being rational? And if not, can we expect the market to settle on a rational price?
49%
Second, the most robust evidence indicates that this wisdom-of-crowds principle holds when forecasts are made independently before being averaged together.
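A minimal numeric illustration of that principle (the guesses and true value below are invented): the average of independent estimates typically lands closer to the truth than the typical individual estimate does.

```python
import statistics

# Hypothetical independent estimates of a quantity whose true value is 100.
true_value = 100.0
guesses = [92, 104, 97, 110, 88, 106, 101, 95]

# Error of the averaged "crowd" forecast vs. the average individual error.
crowd_error = abs(statistics.mean(guesses) - true_value)
mean_individual_error = statistics.mean(abs(g - true_value) for g in guesses)

print(f"crowd error: {crowd_error}, mean individual error: {mean_individual_error}")
```

The effect depends on the errors being independent, which is exactly why the principle holds for forecasts made *before* averaging: once forecasters see each other's numbers, their errors correlate and the averaging benefit shrinks.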
50%
The paper, although it would later be cited more than 4,000 times, at first received about as much attention as most things published by University of Chicago graduate students. But it had laid the groundwork for the efficient-market hypothesis. The central claim of the theory is that the movement of the stock market is unpredictable to any meaningful extent.
64%
This mathematical argument for a focus on larger-scale threats cuts somewhat against the day-to-day imperatives of those who are actively involved in homeland security. In 1982, the social scientists James Q. Wilson and George L. Kelling introduced what they called the “broken windows” theory of crime deterrence.