To reduce the role of luck, the researchers examined how participants did, on average, across numerous forecasts.
Participants were asked to make their forecasts in terms of probabilities that an event would happen. Given our objective ignorance of future events, probabilistic forecasts are much better than categorical yes-or-no predictions. A forecaster whose stated probabilities match the frequencies with which events actually occur is said to be well calibrated.
The researchers gave the participants the opportunity to revise their forecasts continuously in light of new information, and each one of these updates was treated as a new forecast. (This rule rewards forecasters who follow the dictum often attributed to Keynes: “When the facts change, I change my mind. What do you do?”)
Brier scores, as they are known, measure the distance between what people forecast and what actually happens.
This scoring also addresses a pervasive problem associated with probabilistic forecasts: the incentive for forecasters to hedge their bets by never taking a bold stance. Consider a forecaster, Margaret, who assigns the same middling probability to every question and happens to match the overall frequency of events: Margaret’s forecasts are well calibrated but also practically useless. A good score requires high resolution in addition to good calibration, that is, probabilities that move decisively toward 0% or 100% on the right occasions.
Brier scores are based on the logic of mean squared errors, and lower scores are better: a score of 0 would be perfect.
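To make the scoring concrete, here is a minimal sketch in Python (the brier_score helper and all the numbers are illustrative, not from the book):

```python
# Brier score: mean squared distance between probability forecasts and
# binary outcomes (1 if the event happened, 0 if it did not).
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]  # half of the events occurred

# A hedger who says 50% every time is well calibrated but has no resolution.
hedged = [0.5] * 8
# A bolder forecaster who usually leans the right way scores far better.
bold = [0.9, 0.1, 0.8, 0.9, 0.2, 0.1, 0.7, 0.3]

print(brier_score(hedged, outcomes))  # 0.25
print(brier_score(bold, outcomes))    # 0.0375
```

Note that the constant 50% forecast is pinned at 0.25 no matter what happens; only resolution can push the score toward 0.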
The overwhelming majority of the volunteers did poorly, but about 2% stood out. Tetlock calls these well-performing people superforecasters.
This comparison is worth pausing over.
One hypothesis is that superforecasters are unusually intelligent. Another is that superforecasters are unusually good with numbers; they do indeed show ease in thinking analytically and probabilistically.
But their defining habit is how they approach a forecasting problem: they structure and disaggregate it, breaking it up into its component parts; they ask and try to answer an assortment of subsidiary questions; and they habitually take the outside view, caring a lot about base rates.
What sets superforecasters apart isn’t their sheer intelligence; it’s how they apply it.
They display a high level of “active open-mindedness” and are not shy about updating their judgments (without overreacting) when new information becomes available.
In Tetlock’s words, “the strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement.”
They like a particular cycle of thinking: “try, fail, analyze, adjust, try again.”
The researchers also tested the effect of different interventions on the quality of subsequent judgments. The interventions correspond to three of the strategies we have described to improve judgments: training, teaming, and selection. Training consisted of a tutorial on probabilistic reasoning, in which forecasters learned about various biases and about the importance of averaging multiple predictions (the sketch below illustrates why averaging helps).
Teaming could increase accuracy by encouraging forecasters to deal with opposing arguments and to be actively open-minded.
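Both the advice to average multiple predictions and the effect of teaming rest on the same statistical fact: independent random errors partly cancel in an average. Here is a minimal simulation sketch (TRUE_P, the noise level, and the panel size are made-up parameters, not data from the tournament):

```python
import random

random.seed(0)
TRUE_P = 0.7                        # assumed true probability of each event
N_EVENTS, N_FORECASTERS = 1000, 5

outcomes = [1 if random.random() < TRUE_P else 0 for _ in range(N_EVENTS)]

def noisy_forecasts():
    """One forecaster: the true probability plus independent random noise."""
    return [min(max(TRUE_P + random.gauss(0, 0.2), 0.0), 1.0)
            for _ in range(N_EVENTS)]

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

panel = [noisy_forecasts() for _ in range(N_FORECASTERS)]
averaged = [sum(fs) / N_FORECASTERS for fs in zip(*panel)]

# The averaged forecast beats the typical individual forecaster,
# because averaging shrinks the random (noise) component of the error.
print(sum(brier_score(p, outcomes) for p in panel) / N_FORECASTERS)
print(brier_score(averaged, outcomes))  # lower (better)
```

Averaging leaves any shared bias untouched; what it removes is noise, which foreshadows the finding below.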
All three interventions worked.
To understand why, consider three major reasons why some forecasters perform better or worse than others: they may be more skilled at finding and analyzing data in the environment that are relevant to the prediction they have to make (information); they may have a general tendency to err on a particular side of the true value (bias); and they may be more or less noisy in their use of the probability scale (noise). These three components give the BIN (bias, information, noise) model of forecasting its name.
Applying this model, the researchers found that all three interventions worked primarily by reducing noise: whenever an intervention boosted accuracy, it worked mainly by suppressing random errors in judgment.
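The intuition can be written as a simple error decomposition in the book’s mean-squared-error framing (a simplified sketch; the published BIN model is a more elaborate statistical model). Here f is a probability forecast and o the outcome, with the expectation taken over many forecasts; the variance term bundles judgment noise together with outcome uncertainty that better information could reduce:

```latex
% Simplified sketch: overall error splits into a systematic component
% (bias) and a random component; the interventions mostly shrank the latter.
\[
\underbrace{\mathbb{E}\!\left[(f - o)^{2}\right]}_{\text{mean squared error}}
  \;=\;
\underbrace{\bigl(\mathbb{E}[f - o]\bigr)^{2}}_{\text{bias}^{2}}
  \;+\;
\underbrace{\operatorname{Var}(f - o)}_{\text{noise (random error)}}
\]
```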
This is striking, because Tetlock’s training is designed to fight psychological biases. Paradoxically, training forecasters to fight their psychological biases works largely by reducing noise.
When working in groups, the superforecasters seem capable of avoiding the dangers of group polarization and information cascades.
Selection had the largest total effect. But the main effect of selection is, again, to reduce noise: superforecasters may owe their success more to superior discipline in tamping down measurement error than to better information or a smaller bias.