Kindle Notes & Highlights
by Nate Silver
Read between January 30 - April 26, 2019
Throughout essentially all of human history, economic growth had proceeded at a rate of perhaps 0.1 percent per year, enough to allow for a very gradual increase in population, but not any growth in per capita living standards.26 And then, suddenly, there was progress when there had been none. Economic growth began to zoom upward much faster than the growth rate of the population, as it has continued to do through to the present day, the occasional global financial meltdown notwithstanding.
a certain amount of immersion in a topic will provide disproportionately more insight than an executive summary.
Nobody saw it coming. When you can’t state your innocence, proclaim your ignorance: this is often the first line of defense when there is a failed forecast.
Risk, as first articulated by the economist Frank H. Knight in 1921,45 is something that you can put a price on.
Uncertainty, on the other hand, is risk that is hard to measure. You might have some vague awareness of the demons lurking out there. You might even be acutely concerned about them. But you have no real idea how many of them there are or when they might strike.
Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.
Supply and demand is an example of a negative feedback: as prices go up, sales go down. Despite their name, negative feedbacks are a good thing for a market economy.
Greed and fear are volatile quantities, however, and the balance can get out of whack. When there is an excess of greed in the system, there is a bubble. When there is an excess of fear, there is a panic.
Or say that you are considering buying another type of asset: a mortgage-backed security. This type of commodity may be even harder to value. But the more investors buy them—and the more the ratings agencies vouch for them—the more confidence you might have that they are safe and worthwhile investments. Hence, you have a positive feedback—and the potential for a bubble.
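The feedback dynamics in the passages above can be sketched in a few lines of code. This is a toy simulation, not anything from the book: it contrasts a negative-feedback market, where a price above "fair value" suppresses demand and pulls the price back, with a positive-feedback market, where a rising price attracts still more buyers. The fair value, step size, and feedback strength are all invented for illustration.

```python
def simulate(feedback_sign, steps=30, price=100.0):
    """Return a price path. feedback_sign=-1 stabilizes (negative feedback);
    feedback_sign=+1 lets deviations compound (positive feedback)."""
    prices = [price]
    for _ in range(steps):
        deviation = prices[-1] - 100.0  # distance from an assumed "fair value"
        # A small constant demand shock plus feedback on the deviation.
        prices.append(prices[-1] + feedback_sign * 0.2 * deviation + 1.0)
    return prices

negative = simulate(-1)  # settles near fair value: supply and demand at work
positive = simulate(+1)  # the same small shock compounds into a bubble
```

Under negative feedback the path converges to a new equilibrium a few points above fair value; under positive feedback the identical shock grows without bound, which is the bubble mechanism the highlight describes.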
financial crises typically produce rises in unemployment that persist for four to six years.86 Another study by Reinhart, which focused on more recent financial crises, found that ten of the last fifteen countries to endure one had never seen their unemployment rates recover to their precrisis levels.
There is a technical term for this type of problem: the events these forecasters were considering were out of sample. When there is a major failure of prediction, this problem usually has its fingerprints all over the crime scene.
forecasters often resist considering these out-of-sample problems. When we expand our sample to include events further apart from us in time and space, it often means that we will encounter cases in which the relationships we are studying did not hold up as well as we are accustomed to. The model will seem to be less powerful. It will look less impressive in a PowerPoint presentation (or a journal article or a blog post). We will be forced to acknowledge that we know less about the world than we thought we did. Our personal and professional incentives almost always discourage us from doing…
even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening.
The experts in his survey—regardless of their occupation, experience, or subfield—had done barely any better than random chance, and they had done worse than even rudimentary statistical methods at predicting future political events. They were grossly overconfident and terrible at calculating probabilities: about 15 percent of events that they claimed had no chance of occurring in fact happened, while about 25 percent of those that they said were absolutely sure things in fact failed to occur.15 It didn’t matter whether the experts were making predictions about economics, domestic politics, or…
If hedgehogs are hunters, always looking out for the big kill, then foxes are gatherers.
Whereas the hedgehogs’ forecasts were barely any better than random chance, the foxes’ demonstrated predictive skill.
Foxes sometimes have more trouble fitting into type A cultures like television, business, and politics. Their belief that many problems are hard to forecast—and that we should be explicit about accounting for these uncertainties—may be mistaken for a lack of self-confidence.
Hedgehogs who have lots of information construct stories—stories that are neater and tidier than the real world, with protagonists and villains, winners and losers, climaxes and dénouements—and, usually, a happy ending for the home team.
Politics may be especially susceptible to poor predictions precisely because of its human elements: a good election engages our dramatic sensibilities. This does not mean that you must feel totally dispassionate about a political event in order to make a good prediction about it. But it does mean that a fox’s aloof attitude can pay dividends.
posting detailed and data-driven analyses on issues like polls and fundraising numbers. I studied which polling firms had been most accurate in the past, and how much winning one state—Iowa, for instance—tended to shift the numbers in another.
The FiveThirtyEight forecasting model started out pretty simple—basically, it took an average of polls but weighted them according to their past accuracy—then gradually became more intricate. But it abided by three broad principles, all of which are very fox-like.
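The first version of the model, as described above, can be sketched as a weighted average. This is a hedged illustration of the idea, not the actual FiveThirtyEight code; the polling numbers and accuracy weights are invented.

```python
def weighted_poll_average(polls):
    """polls: list of (candidate_share_pct, accuracy_weight) tuples.
    Firms with better past accuracy get larger weights."""
    total_weight = sum(w for _, w in polls)
    return sum(share * w for share, w in polls) / total_weight

polls = [
    (52.0, 3.0),  # historically accurate firm: large weight
    (49.0, 1.0),  # less reliable firm: counts for less
    (50.0, 2.0),
]
print(round(weighted_poll_average(polls), 2))  # → 50.83
```

A simple (unweighted) mean of these three polls would be 50.33; weighting by past accuracy pulls the estimate toward the more reliable firm.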
Principle 1: Think Probabilistically
The wide distribution of outcomes represented the most honest expression of the uncertainty in the real world. The forecast was built from forecasts of each of the 435 House seats individually—and an exceptionally large number of those races looked to be extremely close.
But polls do become more accurate the closer you get to Election Day.
Our brains, wired to detect patterns, are always looking for a signal, when instead we should appreciate how noisy the data is.
If you forecast that a particular incumbent congressman will win his race 90 percent of the time, you’re also forecasting that he should lose it 10 percent of the time.28 The signature of a good forecast is that each of these probabilities turns out to be about right over the long run.
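The calibration idea in the highlight above — that stated probabilities should match observed frequencies over the long run — can be checked mechanically. This sketch uses synthetic data: it simulates a perfectly calibrated forecaster, then groups forecasts by stated probability and compares against how often the event actually occurred.

```python
import random

random.seed(0)

# Simulate a well-calibrated forecaster: each event occurs with exactly
# the probability the forecaster assigned to it.
forecasts = [random.choice([0.1, 0.5, 0.9]) for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]

# Bucket by stated probability and compare with the observed frequency.
for p in (0.1, 0.5, 0.9):
    bucket = [o for f, o in zip(forecasts, outcomes) if f == p]
    observed = sum(bucket) / len(bucket)
    print(f"forecast {p:.0%}: observed {observed:.1%}")
```

For a real forecaster, a large gap between a bucket's stated probability and its observed frequency is the overconfidence Tetlock measured: "sure things" that fail 25 percent of the time.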
Principle 2: Today’s Forecast Is the First Forecast of the Rest of Your Life
Ultimately, the right attitude is that you should make the best forecast possible today—regardless of what you said last week, last month, or last year.
Principle 3: Look for Consensus
foxes have developed an ability to emulate this consensus process. Instead of asking questions of a whole group of experts, they are constantly asking questions of themselves. Often this implies that they will aggregate different types of information together—as a group of people with different ideas about the world naturally would—instead of treating any one piece of evidence as though it is the Holy Grail.
Weighing Qualitative Information
You will need to recognize that there is wisdom in seeing the world from a different viewpoint.
The foxy forecaster recognizes the limitations that human judgment imposes in predicting the world’s course. Knowing those limits can help her to get a few more predictions right.
But statheads can have their biases too. One of the most pernicious ones is to assume that if something cannot easily be quantified, it does not matter.
I identified five different intellectual and psychological abilities that he believes help to predict success at the major-league level.
Preparedness and Work Ethic
Concentration and Focus
Competitiveness and Self-Confidence
Stress Management and Humility
Adaptiveness and Learning Ability
The key to making a good forecast, as we observed in chapter 2, is not in limiting yourself to quantitative information. Rather, it’s having a good process for weighing the information appropriately.
The litmus test for whether you are a competent forecaster is if more information makes your predictions better. If you’re screwing it up, you have some bad habits and attitudes, like Phil Tetlock’s political pundits did.
Many times, in fact, it is possible to translate qualitative information into quantitative information.
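One minimal sketch of that translation, using invented labels and numbers rather than anything from the book: qualitative scouting language can be mapped onto a numeric scale so that it can enter a statistical model alongside the quantitative data.

```python
# Hypothetical mapping from qualitative labels to a numeric grade.
GRADE = {
    "poor": 20,
    "below average": 40,
    "average": 50,
    "above average": 60,
    "elite": 80,
}

# An invented scouting report on the kinds of traits listed above.
report = {
    "work ethic": "elite",
    "focus": "average",
    "stress management": "below average",
}

numeric = {trait: GRADE[label] for trait, label in report.items()}
print(numeric)
```

Once encoded this way, a trait like work ethic can be weighed in a model just like a batting average, which is the point of the highlight: the information was always there, it just wasn't in numeric form.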
When we can’t fit a square peg into a round hole, we’ll usually blame the peg—when sometimes it’s the rigidity of our thinking that accounts for our failure to accommodate it. Our first instinct is to place information into categories—usually a relatively small number of categories since they’ll be easier to keep track of.
This might work well enough most of the time. But when we have trouble categorizing something, we’ll often overlook it or misjudge it.
The categorizations and approximations we make in the normal course of our lives are usually good enough to get by, but sometimes we let information that might give us a competitive advantage slip through the cracks.
The key is to develop tools and habits so that you are more often looking for ideas and information in the right places—and in honing the skills required to harness them into W’s and L’s once you’ve found them.
The weather is the epitome of a dynamic system, and the equations that govern the movement of atmospheric gases and fluids are nonlinear—mostly differential equations.23 Chaos theory therefore most definitely applies to weather forecasting, making the forecasts highly vulnerable to inaccuracies in our data.
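The sensitivity described above can be demonstrated with the Lorenz system, a classic nonlinear model of atmospheric convection (not a model from the book; the step size and parameters are the standard textbook choices). Two runs start from initial conditions that differ by one part in a million, and the simple Euler integration below is an illustration, not production-grade numerics.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)  # a one-in-a-million error in the initial data

max_gap = 0.0
for _ in range(3000):  # 30 time units of simulated "weather"
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_gap = max(max_gap, abs(a[0] - b[0]))

print(max_gap)  # vastly larger than the 1e-6 initial perturbation
```

The two trajectories diverge until they are as far apart as any two random points on the attractor, which is exactly why small inaccuracies in the observed data place a hard limit on how far ahead weather can be forecast.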
the forecasts you actually see reflect a combination of computer and human judgment. Humans can make the computer forecasts better or they can make them worse.
An influential 1993 essay38 by Allan Murphy, then a meteorologist at Oregon State University, posited that there were three definitions of forecast quality