Kindle Notes & Highlights
commitment to self-improvement to be the strongest
If Ferrucci is right—and I suspect he is—we will need to blend computer-based forecasting and subjective judgment in the future. So it’s time we got serious about both.
“All who drink of this treatment recover in a short time, except those whom it does not help, who all die,” he wrote. “It is obvious, therefore, that it fails only in incurable cases.”
As Daniel Kahneman puts it, “System 1 is designed to jump to conclusions from little evidence.”
Magnus Carlsen, the world chess champion and the highest-ranked player in history. “If I study a position for an hour then I am usually going in loops and I’m probably not going to come up with something useful. I usually know what I am going to do after 10 seconds; the rest is double-checking.”
But then the train of history hit a curve, and as Karl Marx once quipped, when that happens, the intellectuals fall off.
“estimating is what you do when you do not know.”
There was an inverse correlation between fame and accuracy: the more famous an expert was, the less accurate he was.
It was the earliest demonstration of a phenomenon popularized by—and now named for—James Surowiecki’s bestseller The Wisdom of Crowds: aggregating the judgment of many consistently beats the accuracy of the average member of the group.
How well aggregation works depends on what you are aggregating. Aggregating the judgments of many people who know nothing produces a lot of nothing. Aggregating the judgments of people who know a little is better, and if there are enough of them it can produce impressive results. But aggregating the judgments of an equal number of people who know lots about lots of different things is most effective, because the collective pool of information becomes much bigger. Aggregations of aggregations can also yield impressive results.
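The aggregation effect described above is easy to demonstrate numerically. The sketch below is illustrative, not from the book: each simulated judge “knows a little” (their guess is the truth plus large random noise), yet the mean of all guesses lands far closer to the truth than a typical individual does, because the independent errors cancel.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0

# Each judge "knows a little": their guess is the truth plus large noise.
judges = [TRUE_VALUE + random.gauss(0, 30) for _ in range(1000)]

# Compare the error of a typical individual with the error of the aggregate.
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in judges) / len(judges)
crowd_guess = sum(judges) / len(judges)
crowd_error = abs(crowd_guess - TRUE_VALUE)

print(f"typical individual error: {avg_individual_error:.1f}")
print(f"crowd (aggregate) error:  {crowd_error:.1f}")
```

The cancellation only works when errors are independent; judges who all share the same blind spot would shift the average, not correct it—which is why aggregating diverse perspectives matters.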
A fox with the bulging eyes of a dragonfly is an ugly mixed metaphor but it captures a key reason why the foresight of foxes is superior to that of hedgehogs with their green-tinted glasses. Foxes aggregate perspectives.
No model captures the richness of human nature. Models are supposed to simplify things, which is why even the best are flawed. But they’re necessary. Our minds are full of models. We couldn’t function without them. And we often function pretty well because some of our models are decent approximations of reality. “All models are wrong,” the statistician George Box observed, “but some are useful.” The fox/hedgehog model is a starting point, not the end.
Mauboussin notes that slow regression is more often seen in activities dominated by skill, while faster regression is more associated with chance.
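Mauboussin’s point can be checked with a toy simulation (my own sketch, with arbitrary parameters): model each performance as skill plus luck, run two rounds, and measure how strongly round-one scores predict round-two scores. When skill dominates, the round-to-round correlation is high and regression to the mean is slow; when chance dominates, the correlation collapses and regression is fast.

```python
import random

random.seed(0)
N = 10_000

def round_to_round_correlation(skill_sd, luck_sd):
    """Simulate two rounds where performance = skill + fresh luck.

    Returns the correlation between round-1 and round-2 scores:
    high correlation means slow regression (skill-dominated),
    low correlation means fast regression (chance-dominated)."""
    skills = [random.gauss(0, skill_sd) for _ in range(N)]
    r1 = [s + random.gauss(0, luck_sd) for s in skills]
    r2 = [s + random.gauss(0, luck_sd) for s in skills]
    m1, m2 = sum(r1) / N, sum(r2) / N
    cov = sum((a - m1) * (b - m2) for a, b in zip(r1, r2)) / N
    v1 = sum((a - m1) ** 2 for a in r1) / N
    v2 = sum((b - m2) ** 2 for b in r2) / N
    return cov / (v1 * v2) ** 0.5

skill_game = round_to_round_correlation(skill_sd=3, luck_sd=1)  # mostly skill
luck_game = round_to_round_correlation(skill_sd=1, luck_sd=3)   # mostly chance

print(f"skill-dominated correlation:  {skill_game:.2f}")
print(f"chance-dominated correlation: {luck_game:.2f}")
```

With these parameters the skill-dominated game’s correlation is near 0.9 while the chance-dominated game’s is near 0.1, so a top performer in the latter should be expected to fall back much faster.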
Times. It should also satisfy the reader. If you’ve made it this far, you’ve probably got the right stuff.
“Need for cognition” is the psychological term for the tendency to engage in and enjoy hard mental slogs.
For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded. It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.
It was remarkably late in history—arguably as late as the 1713 publication of Jakob Bernoulli’s Ars Conjectandi—before the best minds started to think seriously about probability.
And people equate confidence and competence, which makes the forecaster who says something has a middling probability of happening less worthy of respect.
If nothing is certain, it follows that the two- and three-setting mental dials are fatally flawed.
Robert Rubin is a probabilistic thinker. As a student at Harvard, he heard a lecture in which a philosophy professor argued there is no provable certainty and “it just clicked with everything I’d sort of thought,” he told me. It became the axiom that guided his thinking through twenty-six years at Goldman Sachs, as an adviser to President Bill Clinton, and as secretary of the Treasury. It’s in the title of his autobiography: In an Uncertain World.
Epistemic uncertainty is something you don’t know but is, at least in theory, knowable.
Aleatory uncertainty is something you not only don’t know; it is unknowable.
Like oil and water, chance and fate do not mix. And to the extent that we allow our thoughts to move in the direction of fate, we undermine our ability to think probabilistically.
So finding meaning in events is positively correlated with well-being but negatively correlated with foresight.
The Yale professor Dan Kahan has done much research showing that our judgments about risks—Does gun control make us safer or put us in danger?—are driven less by a careful weighing of evidence than by our identities, which is why people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other. Psycho-logic trumps logic.
This suggests that superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast.
Many studies have found that those who trade more frequently get worse returns than those who lean toward old-fashioned buy-and-hold strategies.
In Greek mythology, any discussion of two opposing dangers called for Scylla and Charybdis. Scylla was a rock shoal off the coast of Italy. Charybdis was a whirlpool on the coast of Sicily, not far away. Sailors knew they would be doomed if they strayed too far in either direction.
superforecasters not only update more often than other forecasters, they update in smaller increments.
What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.
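That core insight—nudge the estimate in proportion to how diagnostic the evidence is—is just Bayes’ theorem applied repeatedly. A minimal sketch (the probabilities below are invented for illustration):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability after one piece of evidence.

    likelihood_if_true / likelihood_if_false is the chance of seeing
    this evidence if the forecast event will / will not happen."""
    numer = prior * likelihood_if_true
    return numer / (numer + (1 - prior) * likelihood_if_false)

# Start with a 30% estimate, then fold in two pieces of evidence.
p = 0.30
p = bayes_update(p, 0.8, 0.4)   # diagnostic evidence: a sizable shift
p = bayes_update(p, 0.55, 0.5)  # weak evidence: only a small nudge
print(f"updated forecast: {p:.2f}")
```

Note how the first, diagnostic observation moves the forecast substantially while the second, nearly uninformative one barely moves it—echoing the finding that superforecasters update often, but usually in small increments.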
The psychologist Carol Dweck would say Simpson has a “growth mindset,” which Dweck defines as believing that your abilities are largely the product of effort—that you can “grow” to the extent that you are willing to work hard and learn.
The scans revealed that volunteers with a fixed mindset were fully engaged when they were told whether their answers were right or wrong, but that was apparently all they cared about. Information that could help improve their answers didn’t engage them.
“There is no harm in being sometimes wrong, especially if one is promptly found out,” he wrote in 1933.
Try, fail, analyze, adjust, try again: Keynes cycled through those steps ceaselessly.
“A simple analysis shows that for a given angle of unbalance the curvature of each winding is inversely proportional to the square of the speed at which the cyclist is proceeding.”
People who read the booklet benefited more from practice and people who practiced benefited more from reading the booklet. Fortune favors the prepared mind.
That’s because experience isn’t enough. It must be accompanied by clear feedback.
As a result, officers grow confident faster than they grow accurate, meaning they grow increasingly overconfident.
Vague language is elastic language.
The lesson for forecasters who would judge their own vague forecasts is: don’t kid yourself.
The second big barrier to feedback is time lag.
On average, the experts recalled a number 31 percentage points higher than the correct figure.
To get better at a certain type of forecasting, that is the type of forecasting you must do—over and over again, with good feedback telling you how your training is going.
So experts were open to “I was almost right” scenarios but rejected “I was almost wrong” alternatives. Not Devyn.
Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress.
The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement.
All that said, there is another element that is missing entirely from the sketch: other people. In our private lives and our workplaces, we seldom make judgments about the future entirely in isolation. We are a social species. We decide together.
“members of any small cohesive group tend to maintain esprit de corps by unconsciously developing a number of shared illusions and related norms that interfere with critical thinking and reality testing.” Groups that get along too well don’t question assumptions or confront uncomfortable facts.
Prediction markets beat ordinary teams by about 20%. And superteams beat prediction markets by 15% to 30%.
A group of opinionated people who engage one another in pursuit of the truth will be more than the sum of its opinionated parts.