Kindle Notes & Highlights
Read between October 28 - November 30, 2021
Illusory correlation, as this fallacy is called, was first shown in a famous set of experiments by the psychologists Loren and Jean Chapman, who wondered why so many psychotherapists still used the Rorschach inkblot and Draw-a-Person tests even though every study that had ever tried to validate them showed no correlation between responses on the tests and psychological symptoms.
A bored law student, Tyler Vigen, wrote a program that scrapes the web for datasets with meaningless correlations just to show how prevalent they are. The number of murders by steam or hot objects, for example, correlates highly with the age of the reigning Miss America. And the divorce rate in Maine closely tracks national consumption of margarine.
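Vigen's trick is easy to reproduce. The sketch below (all series are seeded random noise, not his actual datasets) generates thousands of unrelated 20-point series and reports the strongest correlation any of them shows with an equally random target; with enough candidates, an impressively high r is guaranteed:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(0)
n_points = 20  # e.g. twenty annual observations
target = [random.gauss(0, 1) for _ in range(n_points)]  # pure noise
candidates = [[random.gauss(0, 1) for _ in range(n_points)]
              for _ in range(5000)]

# Scrape the "web" of unrelated series for the best match to the target.
best = max(abs(pearson(target, c)) for c in candidates)
print(f"strongest |r| among 5000 unrelated series: {best:.2f}")
```

This is the multiple-comparisons problem in miniature: test enough pairs of variables and some will correlate strongly by chance alone.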
The term originally referred to a specific phenomenon that comes along with correlation, regression to the mean.
Francis Galton found that “when mid-parents are taller than mediocrity, their children tend to be shorter than they. When mid-parents are shorter than mediocrity, their children tend to be taller than they.”
The reason that populations don’t collapse into uniform mediocrity, despite regression to the mean, is that the tails of the distribution are constantly being replenished by the occasional very tall child of taller-than-average parents and very short child of shorter-than-average ones.
Regression to the mean is purely a statistical phenomenon, a consequence of the fact that in bell-shaped distributions, the more extreme a value, the less likely it is to turn up.
Regression to the mean happens whenever two variables are imperfectly correlated, which means that we have a lifetime of experience with it.
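A toy simulation (invented numbers, not Galton's measurements) makes the statistical inevitability concrete: if parent and child heights correlate at an assumed r = 0.5, the children of parents two standard deviations above the mean average only about one standard deviation above it, and likewise in the other direction:

```python
import random
import statistics

random.seed(1)
r = 0.5  # hypothetical parent-child correlation
parents = [random.gauss(0, 1) for _ in range(100_000)]  # heights as z-scores
children = [r * p + random.gauss(0, (1 - r**2) ** 0.5) for p in parents]

# Children of very tall (> +2 SD) parents regress downward toward the mean...
tall_kids = [c for p, c in zip(parents, children) if p > 2]
print(f"mean child z-score, parents above +2 SD: {statistics.mean(tall_kids):.2f}")

# ...and children of very short (< -2 SD) parents regress upward.
short_kids = [c for p, c in zip(parents, children) if p < -2]
print(f"mean child z-score, parents below -2 SD: {statistics.mean(short_kids):.2f}")
```

Nothing pushes the children back; extreme parental heights simply reflect luck that the children don't inherit.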
Instead, they come up with fallacious causal explanations for what in fact is a statistical inevitability. A tragic example is the illusion that criticism works better than praise, and punishment better than reward.11 We criticize students when they perform badly. But whatever bad luck cursed that performance is unlikely to be repeated in the next attempt, so they’re bound to improve, tricking us into thinking that punishment works.
But if an athlete is singled out for an extraordinary week or year, the stars are unlikely to align that way twice in a row, and he or she has nowhere to go but meanward. (Equally meaninglessly, a slumping team will improve after the coach is fired.)
Hume, once again, set the terms for centuries of analysis by venturing that causation is merely an expectation that a correlation we experienced in the past will hold in the future.
Likewise thunder often precedes a forest fire, but we don’t say thunder causes fires. These are epiphenomena, also known as confounds or nuisance variables: they accompany but do not cause the event. Epiphenomena are the bane of epidemiology.
Hume anticipated the problem and elaborated on his theory: not only does the cause have to regularly precede its effect, but “if the first object had not been, the second never had existed.”
In a parallel universe in which the cause didn’t happen, neither did the effect. This counterfactual definition of causation solves the epiphenomenon problem.
Causation, then, can be thought of as the difference between outcomes when an event (the cause) takes place and when it does not.
We can, to be sure, compare the outcomes in this universe on the various occasions when that kind of event does or does not take place. But that runs smack into a problem pointed out by Heraclitus in the sixth century BCE: You can’t step in the same river twice. Between those two occasions, the world may have changed in other ways, and you can’t be sure whether one of those other changes was the cause.
Every individual is unique, so we can’t know whether an outcome experienced by an individual depended on the supposed cause or on that person’s myriad idiosyncrasies.
We connect the cause to its effect with a mechanism: the clockwork behind the scenes that pushes things around.
One is the elusive difference between a cause and a condition. We say that striking a match causes a fire, because without the striking there would be no fire. But without oxygen, without the dryness of the paper, without the stillness of the room, there also would be no fire. So why don’t we say “The oxygen caused the fire”?
no event has a single cause. Events are embedded in a network of causes that trigger, enable, inhibit, prevent, and supercharge one another in linked and branching pathways.
If you interpret the arrows not as logical implications (“If X smokes, then X gets heart disease”) but as conditional probabilities (“The likelihood of X getting heart disease given that X is a smoker is higher than the likelihood of X getting heart disease given that he is not a smoker”),
The inventor of these networks, the computer scientist Judea Pearl, notes that they are built out of three simple patterns—the chain, the fork, and the collider—each capturing a fundamental (but unintuitive) feature of causation with more than one cause.
In a causal chain, the first cause, A, is “screened off” from the ultimate effect, C; its only influence is via B. As far as C is concerned, A might as well not exist. Consider
A causal fork is already familiar: it depicts a confound or epiphenomenon, with the attendant danger of misidentifying the real cause. Age (B) affects vocabulary (A) and shoe size (C), since older children have bigger feet and know more words. This means that vocabulary is correlated with shoe size.
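A simulation of the fork (with made-up growth rates) shows the confound at work: vocabulary and shoe size correlate strongly across all children, but the correlation collapses once age is held fixed:

```python
import random
import statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(2)
ages = [random.uniform(5, 12) for _ in range(10_000)]
shoe = [2 * a + random.gauss(0, 1) for a in ages]       # toy: feet grow with age
vocab = [500 * a + random.gauss(0, 300) for a in ages]  # toy: words grow with age

r_all = pearson(shoe, vocab)  # strong, but driven entirely by the fork (age)

# Condition on age: look only at children in a narrow age band.
seven = [(s, v) for a, s, v in zip(ages, shoe, vocab) if 7 <= a < 7.5]
s7, v7 = zip(*seven)
r_same_age = pearson(s7, v7)  # with age held fixed, the correlation vanishes
print(f"all children: r = {r_all:.2f}; same-age children: r = {r_same_age:.2f}")
```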
Just as dangerous is the collider, where unrelated causes converge on a single effect.
Countries that are richer also tend to be healthier, happier, safer, better educated, less polluted, more peaceful, more democratic, more liberal, more secular, and more gender-egalitarian.22 People who are richer also tend to be healthier, better educated, better connected, likelier to exercise and eat well, and likelier to belong to privileged groups.
There is an impeccable way to cut these knots: the randomized experiment, often called a randomized controlled trial or RCT. Take a large sample from the population of interest, randomly divide them into two groups, apply the putative cause to one group and withhold it from the other, and see if the first group changes while the second does not.
Randomness is the key: if the patients who were given the drug signed up earlier, or lived closer to the hospital, or had more interesting symptoms than the patients who were given the placebo, you’ll never know whether the drug worked.
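The logic can be sketched with a hypothetical drug trial: random assignment makes the two groups statistical twins, so any difference in outcomes can be attributed to the treatment (here an invented true effect of two days):

```python
import random
import statistics

random.seed(3)
# Hypothetical recovery times in days for 2,000 patients, before any treatment.
patients = [random.gauss(10, 3) for _ in range(2000)]

# Random assignment: shuffle, then split down the middle.
random.shuffle(patients)
treated, control = patients[:1000], patients[1000:]
treated = [t - 2 for t in treated]  # the drug's invented true effect: -2 days

effect = statistics.mean(control) - statistics.mean(treated)
print(f"estimated effect: {effect:.2f} days (true effect: 2.00)")
```

Because assignment ignored every patient characteristic, sign-up time, distance, and symptoms are balanced across groups in expectation, and the estimate lands near the true effect.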
The other problem with experimental manipulations, of course, is that the world is not a laboratory. It’s not as if political scientists can flip a coin, impose democracy on some countries and autocracy on others, and wait five years to see which ones go to war. The same practical and ethical problems apply to studies of individuals,
events have more than one cause, all of them statistical.
Perhaps everyone benefits from practice, but talented people benefit more. What we need is a vocabulary for talking and thinking about multiple causes.
smarter players gain more with every additional game of practice. An equivalent way of putting it is that without practice, cognitive ability barely matters (the leftmost tips of the lines almost overlap), but with practice, smarter players show off their talent (the rightmost tips are spread apart). Knowing the difference between main effects and interactions not only protects us from falling for false dichotomies but offers us deeper insight into the nature of the underlying causes.
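A sketch with invented numbers shows what an interaction looks like: in a purely interactive model of performance, the ability gap is barely visible with little practice and wide open with a lot of it:

```python
import random
import statistics

random.seed(4)

def performance(ability, games):
    # Hypothetical interactive model: the payoff of each game scales with ability.
    return ability * games + random.gauss(0, 5)

def mean_perf(ability, games, n=2000):
    return statistics.mean(performance(ability, games) for _ in range(n))

# Ability gap (high = 3 vs. low = 1) after almost no practice vs. heavy practice.
gap_unpracticed = mean_perf(3, 1) - mean_perf(1, 1)
gap_practiced = mean_perf(3, 20) - mean_perf(1, 20)
print(f"ability gap after 1 game: {gap_unpracticed:.1f}; "
      f"after 20 games: {gap_practiced:.1f}")
```

Neither "talent" nor "practice" alone explains the pattern; the effect of each depends on the level of the other, which is what an interaction means.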
Who is the more accurate prognosticator, the expert or the equation? The winner, almost every time, is the equation. In fact, an expert who is given the equation and allowed to use it to supplement his or her judgment often does worse than the equation alone. The reason is that experts are too quick to see extenuating circumstances that they think render the formula inapplicable.
while the human expert is far too impressed with the eye-catching particulars and too quick to throw the base rates out the window. Indeed, some of the predictors that human experts rely on the most, such as face-to-face interviews, are revealed by regression analyses to be perfectly useless.
A person still is indispensable in supplying predictors that require real comprehension, like understanding language and categorizing behavior. It’s just that a human is inept at combining them, whereas that is a regression algorithm’s stock in trade.
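The combining step the human is bad at is entirely mechanical. In the sketch below (synthetic data with invented true weights of 2 and 1), ordinary least squares recovers the optimal combination of two predictors from nothing but sums of products:

```python
import random

random.seed(5)
n = 5000
x1 = [random.gauss(0, 1) for _ in range(n)]  # predictor 1
x2 = [random.gauss(0, 1) for _ in range(n)]  # predictor 2
y = [2 * a + 1 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

# Least-squares weights via the normal equations (2x2 system, Cramer's rule).
s11 = sum(a * a for a in x1)
s22 = sum(b * b for b in x2)
s12 = sum(a * b for a, b in zip(x1, x2))
s1y = sum(a * c for a, c in zip(x1, y))
s2y = sum(b * c for b, c in zip(x2, y))
det = s11 * s22 - s12 * s12
w1 = (s1y * s22 - s2y * s12) / det
w2 = (s2y * s11 - s1y * s12) / det
print(f"recovered weights: {w1:.2f}, {w2:.2f} (true: 2, 1)")
```

The equation weights each predictor by exactly what it contributes and nothing more, which is why it resists the eye-catching particulars that seduce the expert.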
For all the power of a regression equation, the most humbling discovery about predicting human behavior is how unpredictable it is.
Tell people there’s an invisible man in the sky who created the universe, and the vast majority will believe you. Tell them the paint is wet, and they have to touch it to be sure. —George Carlin
Shortly before the announcements of the vaccines, a third of Americans said they would reject them, part of an anti-vax movement that opposes the most benevolent invention in the history of our species.2 Covid quackery has been endorsed by celebrities, politicians, and, disturbingly, the most powerful person on earth at the time of the pandemic, US president Donald Trump.
He predicted in February 2020 that Covid-19 would disappear “like a miracle,” and endorsed quack cures like malaria drugs, bleach injections, and light probes. He disdained basic public health measures like masks and distancing, even after he himself was stricken, inspiring millions of Americans to flout the measures and amplifying the toll of death and financial hardship.3 It was all part of a larger rejection of the norms of reason and science. Trump told around thirty thousand lies during his term, had a press secretary who touted “alternative facts,” claimed that climate change was a
He repeatedly publicized QAnon, the millions-strong conspiracy cult that credits him with combating a cabal of Satan-worshiping pedophiles embedded in the American “deep state.”
three quarters of Americans hold at least one paranormal belief. Here are some figures from the first decade of our century:7
Possession by the devil, 42 percent
Extrasensory perception, 41 percent
Ghosts and spirits, 32 percent
Astrology, 25 percent
Witches, 21 percent
Communicating with the dead, 29 percent
Reincarnation, 24 percent
Spiritual energy in mountains, trees, and crystals, 26 percent
Evil eye, curses, spells, 16 percent
Consulted a fortune-teller or psychic, 15 percent
To be sure, many superstitions originate in overinterpreting coincidences, failing to calibrate evidence against priors, overgeneralizing from anecdotes, and leaping from correlation to causation. A prime example is the misconception that vaccines cause autism, reinforced by the observation that autistic symptoms appear, coincidentally, around the age at which children are first inoculated. And all of them represent failures of critical thinking and of the grounding of belief in evidence;
Conspiracy theories and viral falsehoods are probably as old as language.10 What are the accounts of miracles in scriptures, after all, but fake news about paranormal phenomena?
Social media may indeed be accelerating their spread, but the appetite for florid fantasies lies deep in human nature:
And for all the panic that fake news has sown, its political impact is slight: it titillates a faction of partisans rather than swaying a mass of undecideds.
To understand popular delusions and the madness of crowds, we have to examine cognitive faculties that work well in some environments and for some purposes but that go awry when applied at scale, in novel circumstances, or in the service of other goals.
As Upton Sinclair pointed out, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
The mustering of rhetorical resources to drive an argument toward a favored conclusion is called motivated reasoning.
people seek out arguments that ratify their beliefs and shield themselves from those that might disconfirm them.
In biased evaluation, we deploy our ingenuity to upvote the arguments that support our position and pick nits in the ones that refute it.
So much of our reasoning seems tailored to winning arguments that some cognitive scientists, like Hugo Mercier and Dan Sperber, believe it is the adaptive function of reasoning.23 We evolved not as intuitive scientists but as intuitive lawyers.

