Let’s go back to the cancer example. The prevalence of the disease in the population, 1 percent, is how we set our priors: prob(Hypothesis) = .01. The sensitivity of the test is the likelihood of getting a positive result given that the patient has the disease: prob(Data | Hypothesis) = .9. The marginal probability of a positive test result across the board is the sum of the probabilities of a hit for the sick patients (90 percent of the 1 percent, or .009) and of a false alarm for the healthy ones (9 percent of the 99 percent, or .0891), or .0981.
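To make the arithmetic concrete, here is a minimal sketch of the calculation in Python (the 9 percent false-alarm rate is the figure implied by the passage above):

```python
# Bayes' rule applied to the medical-test example in the text.
prior = 0.01        # prevalence: prob(Hypothesis)
sensitivity = 0.90  # prob(positive | disease)
false_alarm = 0.09  # prob(positive | no disease)

# Marginal probability of a positive result: hits plus false alarms.
marginal = sensitivity * prior + false_alarm * (1 - prior)  # 0.009 + 0.0891 = 0.0981

# Posterior: prob(disease | positive result).
posterior = sensitivity * prior / marginal
print(f"prob(positive) = {marginal:.4f}")             # 0.0981
print(f"prob(disease | positive) = {posterior:.3f}")  # about 0.092, i.e., roughly 9 percent
```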
Kahneman and Tversky singled out a major ineptitude in our Bayesian reasoning: we neglect the base rate, which is usually the best estimate of the prior probability.5 In the medical diagnosis problem, our heads are turned by the positive test result (the likelihood) and we forget about how rare the disease is in the population (the prior).
One of the symptoms of base-rate neglect in the world is hypochondria. Who among us hasn’t worried we have Alzheimer’s after a memory lapse, or an exotic cancer when we have an ache or pain? Another is medical scaremongering.
Once she collected herself, she thought it through like a Bayesian, realized that twitches are common and Tourette’s rare, and calmed back down.
Penelope is unlikely, a priori, to be an art history major. But in our mind’s eye she is representative of an art history major, and the stereotype crowds out the base rates.
a less-than-perfect test for a rare trait will mainly turn out false positives. The heart of the problem is that only a tiny proportion of the population are thieves, suicides, terrorists, or rampage shooters (the base rate).
The judges, falling short of omniscience, cannot be guaranteed to appreciate our virtues. Remembering the base rates—the sheer number of competitors—can take some of the sting out of a rejection.
prior credence is simply the fallible knowledge accumulated from all our experience in the past. Indeed, the posterior probability from one round of looking at evidence can supply the prior probability for the next round, a cycle called Bayesian updating. It’s simply the mindset of someone who wasn’t born yesterday.
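The updating cycle is easy to express in code. A minimal sketch, reusing the test from the cancer example and assuming the repeated results are independent given the patient's true status:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One round of Bayesian updating: the posterior after a single positive result."""
    marginal = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / marginal

# Each posterior becomes the prior for the next round of evidence.
credence = 0.01  # start from the base rate
for round_number in range(1, 4):
    credence = update(credence, 0.90, 0.09)
    print(f"after positive result {round_number}: {credence:.3f}")
# Credence climbs: about 0.092, then 0.503, then 0.910.
```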
Hume’s famous argument against miracles is thoroughly Bayesian:11 Nothing is esteemed a miracle, if it ever happen in the common course of nature.
In other words, miracles such as resurrection must be given a low prior probability. Here is the zinger: No testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavors to establish.
its “falsehood” is the likelihood of the data given no miracle: the possibility that the witness lied, misperceived, misremembered, embellished, or passed along a tall tale he heard from someone else. Given everything we know about human behavior, that’s far from miraculous! Which is to say, its likelihood is higher than the prior probability of a miracle.
Another way of putting it is this: Which is more likely—that the laws of the universe as we understand them are false, or that some guy got something wrong?
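Hume's argument can be run as numbers. The figures below are purely illustrative assumptions (Hume supplied none): so long as the chance of erroneous testimony exceeds the prior probability of a miracle, the testimony cannot establish one.

```python
# Illustrative numbers only, to show the shape of Hume's comparison.
prior_miracle = 1e-9        # assumed prior for a violation of natural law
p_report_if_miracle = 0.99  # witness reports a miracle, given one occurred
p_report_if_none = 0.01     # witness lies, misperceives, or embellishes anyway

marginal = (p_report_if_miracle * prior_miracle
            + p_report_if_none * (1 - prior_miracle))
posterior = p_report_if_miracle * prior_miracle / marginal
print(f"prob(miracle | testimony) = {posterior:.2e}")  # about 1e-7: barely budged
```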
Carl Sagan (1934–1996) in the slogan that serves as this chapter’s epigraph: “Extraordinary claims require extraordinary evidence.”
actual physicists, like Sean Carroll in his book The Big Picture, have explained why the laws of physics really do rule out precognition and other forms of ESP.
A big problem is that many of the phenomena that biomedical researchers hunt for are interesting and a priori unlikely to be true, while many true findings, including successful replication attempts and null results, are considered too boring to publish.
As the physicist John Ziman noted in 1978, “The physics of undergraduate textbooks is 90% true; the contents of the primary research journals of physics is 90% false.”18 It’s a reminder that Bayesian reasoning recommends against the common practice of using “textbook” as an insult and “scientific revolution” as a compliment.
Measure any socially significant variable: test scores, vocational interests, social trust, income, marriage rates, life habits, rates of different types of violence. Now break down the results by the standard demographic dividers: age, sex, race, religion, ethnicity. The averages for the different subgroups are never the same, and sometimes the differences are large. Whether the differences arise from nature, culture, discrimination, history, or some combination is beside the point: the differences are there.
If you were a good Bayesian, you’d start with the base rate for that person’s age, sex, class, race, ethnicity, and religion, and adjust by the person’s particulars. In other words, you’d engage in profiling. You would perpetrate prejudice not out of ignorance, hatred, supremacy, or any of the -isms or -phobias, but from an objective effort to make the most accurate prediction.
If the sex ratio in a professional field is not 50–50, does that prove its gatekeepers are trying to keep women out, or might there be a difference in the base rate of women trying to get in?
If mortgage lenders turn down minority applicants at higher rates, are they racist, or might they, like the hypothetical executive in Tetlock’s study, be using base rates for defaulting from different neighborhoods that just happen to correlate with race?
Race, sex, ethnicity, religion, and sexual orientation have become war zones in intellectual life, even as overt bigotry of all kinds has dwindled.
In reality, a statistical formula is only as good as the assumptions behind it.
For the prior, should I use the base rate for prostate cancer in the population? Among white Americans? Ashkenazi Jews? Ashkenazi Jews over sixty-five? Ashkenazi Jews over sixty-five who exercise and have no family history?
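The practical force of the question is that the same positive test yields very different posteriors under different priors. A sketch with invented base rates for nested reference classes, just to show how much the choice matters:

```python
def posterior(prior, sensitivity=0.90, false_alarm=0.09):
    """Posterior probability of the condition after one positive test."""
    marginal = sensitivity * prior + false_alarm * (1 - prior)
    return sensitivity * prior / marginal

# Hypothetical base rates for nested reference classes (illustrative numbers only).
for label, base_rate in [("general population", 0.01),
                         ("narrower demographic group", 0.05),
                         ("narrower still, with family history", 0.15)]:
    print(f"{label:36s} prior={base_rate:.2f} -> posterior={posterior(base_rate):.2f}")
# The posterior swings from about 0.09 to 0.34 to 0.64 as the reference class narrows.
```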
Another problem with using a base rate as the prior is that base rates can change, and sometimes quickly.
The theory of rational choice goes back to the dawn of probability theory and the famous argument by Blaise Pascal (1623–1662) on why you should believe in God: if you did and he doesn’t exist, you would just have wasted some prayers, whereas if you didn’t and he does exist, you would incur his eternal wrath.
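Pascal's wager is an expected-utility calculation. A toy sketch with stand-in payoffs (the magnitudes are assumptions; Pascal's point was only that one possible loss dwarfs the other):

```python
# Payoffs for each (action, state of the world) pair. Illustrative numbers only.
p_god = 0.001  # even a tiny prior suffices for Pascal's argument
payoff = {
    ("believe", True): 0,            # salvation, netted against the cost of prayers
    ("believe", False): -1,          # some wasted prayers
    ("abstain", True): -1_000_000,   # eternal wrath (stand-in for a huge loss)
    ("abstain", False): 0,
}

for action in ("believe", "abstain"):
    eu = p_god * payoff[(action, True)] + (1 - p_god) * payoff[(action, False)]
    print(f"expected utility of {action}: {eu:,.3f}")
# Believing wins whenever p_god times the wrath outweighs the cost of the prayers.
```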
Unlike the pope, von Neumann really might have been a space alien—his colleagues wondered about it because of his otherworldly intelligence. He also invented game theory (chapter 8), the digital computer, self-replicating machines, quantum logic, and key components of nuclear weapons.
Rational choice is not a psychological theory of how human beings choose, or a normative theory of what they ought to choose, but a theory of what makes choices consistent with the chooser’s values and each other.
A Theory of Rational Choice
The first axiom may be called Commensurability: for any options A and B, the decider prefers A, or prefers B, or is indifferent between them.
The second axiom, Transitivity, is more interesting. When you compare options two at a time, if you prefer A to B, and B to C, then you must prefer A to C.
The third is called Closure. With God playing dice and all that, choices are not always among certainties, like picking an ice cream flavor, but may include a collection of possibilities with different odds, like picking a lottery ticket.
The theory of rational choice is a theory of decision making with known unknowns: with risk, not necessarily uncertainty.
The fifth axiom, Independence, is also interesting. If you prefer A to B, then you also prefer a lottery with A and C as the payouts to a lottery with B and C as the payouts, with the same odds in each.
Independence from Irrelevant Alternatives, as the generic version of Independence is called, is a requirement that shows up in many theories of rational choice.8 A simpler version says that if you prefer A to B when choosing between them, you should still prefer A to B when choosing among A, B, and a third alternative, C.
The sixth is Consistency: if you prefer A to B, then you prefer a gamble in which you have some chance at getting A, your first choice, and otherwise get B, to the certainty of settling for B. Half a chance is better than none.
A rational chooser is a utility maximizer, and vice versa.
Utility is not the same as self-interest; it’s whatever scale of value a rational decider consistently maximizes. If people make sacrifices for their children and friends, if they minister to the sick and give alms to the poor, if they return a wallet filled with money, that shows that love and charity and honesty go into their utility scale.
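One direction of the equivalence is easy to demonstrate in code: rank gambles by expected utility and the resulting preferences are automatically commensurable and transitive. A minimal sketch with a made-up utility scale, one that happens to value charity above cash:

```python
def expected_utility(lottery, utility):
    """A lottery is a list of (outcome, probability) pairs."""
    return sum(prob * utility[outcome] for outcome, prob in lottery)

# A made-up utility scale; nothing stops it from valuing giving over getting.
utility = {"cash": 10, "donate": 12, "nothing": 0}

lotteries = {
    "A": [("donate", 1.0)],
    "B": [("cash", 0.5), ("nothing", 0.5)],
    "C": [("cash", 0.9), ("nothing", 0.1)],
}

# Ranking by a single number guarantees completeness and transitivity.
ranked = sorted(lotteries,
                key=lambda name: expected_utility(lotteries[name], utility),
                reverse=True)
print("preference order:", " > ".join(ranked))  # A > C > B
```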
On a normal day, a terrorist attack or an incident of food poisoning with a dozen victims can get wall-to-wall coverage. But in the midst of a war or pandemic, a thousand lives lost in a day is taken in stride—even though each of those lives, unlike a diminishing dollar, was a real person, a sentient being who loved and was loved.
Kahneman and Tversky conclude that people are not risk-averse across the board, though they are loss-averse: they seek risk if it may avoid a loss.29
It doesn’t take much imagination to see how these framings could be exploited to manipulate people, though they can be avoided with careful presentations of the data, such as always mentioning both the gains and the losses, or displaying them as graphs.
Prospect theory is an alternative to rational choice theory, intended to describe how people do choose rather than prescribe how they ought to choose.33
each additional unit gained or lost counts for less than the ones already incurred—but the slope is steeper on the downside; a loss is more than twice as painful as the equivalent gain is pleasurable.
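That shape is captured by Kahneman and Tversky's value function: concave for gains, convex and steeper for losses. A sketch using their commonly cited parameter estimates (alpha = beta = 0.88, lambda = 2.25; treat the exact values as assumptions):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: diminishing sensitivity in both
    directions, with losses weighted more heavily than gains."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

for amount in (100, -100, 200, -200):
    print(f"value of {amount:+d}: {prospect_value(amount):+.1f}")
# A $100 loss weighs about 2.25 times as much as a $100 gain.
```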
It stands to reason that we are more vigilant about what we have to lose, and take chances to avoid precipitous plunges in well-being.
For every thousand women who undergo annual ultrasound exams for ovarian cancer, 6 are correctly diagnosed with the disease, compared with 5 in a thousand unscreened women—and the number of deaths in the two groups is the same, 3. So much for the benefits. What about the costs? Out of the thousand who are screened, another 94 get terrifying false alarms, 31 of whom suffer unnecessary removal of their ovaries, of whom 5 have serious complications to boot. The number of false alarms and unnecessary surgeries among women who are not screened, of course, is zero. It doesn’t take a lot of math to see that the harms of screening outweigh the benefits.
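The comparison can be tallied directly from the figures just quoted, per thousand women:

```python
# Outcomes per 1,000 women, from the figures quoted above.
screened = {"diagnosed": 6, "deaths": 3, "false_alarms": 94,
            "unnecessary_surgeries": 31, "serious_complications": 5}
unscreened = {"diagnosed": 5, "deaths": 3, "false_alarms": 0,
              "unnecessary_surgeries": 0, "serious_complications": 0}

for outcome in screened:
    diff = screened[outcome] - unscreened[outcome]
    print(f"{outcome:22s} screened={screened[outcome]:3d} "
          f"unscreened={unscreened[outcome]:3d} difference={diff:+d}")
# Deaths: no difference. Harms: 94 false alarms, 31 surgeries, 5 complications.
```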
As for betting your life: Have you ever saved a minute on the road by driving over the speed limit, or indulged your impatience by checking your new texts while crossing the street? If you weighed the benefits against the chance of an accident multiplied by the price you put on your life, which way would it go? And if you don’t think this way, can you call yourself rational?
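The implied calculation looks like this, with stand-in numbers (both the value placed on a life and the added risk are illustrative assumptions, not estimates):

```python
# Does saving a minute justify a sliver of added fatal risk? Illustrative numbers only.
value_of_life = 10_000_000    # dollars; a stand-in figure, not an official estimate
value_of_minute = 0.50        # what the saved minute is worth to you
added_fatality_risk = 1e-7    # assumed extra chance of a fatal accident

expected_cost = added_fatality_risk * value_of_life  # $1.00
print(f"expected cost of the risk: ${expected_cost:.2f} "
      f"vs. benefit ${value_of_minute:.2f}")
# On these numbers the minute isn't worth it; the point is that rationality
# calls for some such weighing at all.
```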
Rationality requires that we distinguish what is true from what we want to be true.
Signal detection theory, also called statistical decision theory, combines the big ideas of the two preceding chapters: estimating the probability that something is true of the world (Bayesian reasoning) and deciding what to do about it by weighing its expected costs and benefits (rational choice).
The signal detection challenge is whether to treat some indicator as a genuine signal from the world or as noise in our imperfect perception of it.
The output of statistical decision theory is not a degree of credence but an actionable decision: to have surgery or not, to convict or acquit. In coming down on one side or the other, we are not deciding what to believe about the state of the world. We’re committing to an action in expectation of its likely costs and benefits.
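That combination can be written down directly: form the Bayesian posterior that the signal is real, then act only if the expected payoff of acting beats the expected payoff of holding off. A minimal sketch with assumed payoffs:

```python
def decide(posterior, benefit_hit, cost_false_alarm, cost_miss, benefit_reject):
    """Act iff the expected utility of acting exceeds that of not acting."""
    eu_act = posterior * benefit_hit - (1 - posterior) * cost_false_alarm
    eu_wait = -posterior * cost_miss + (1 - posterior) * benefit_reject
    return "act" if eu_act > eu_wait else "hold off"

# Illustrative payoffs: a miss (say, untreated disease) is ten times worse
# than a false alarm, so even a modest posterior tips the decision toward acting.
for p in (0.05, 0.09, 0.30):
    choice = decide(p, benefit_hit=100, cost_false_alarm=20,
                    cost_miss=200, benefit_reject=0)
    print(f"posterior {p:.2f}: {choice}")
# 0.05 -> hold off; 0.09 -> act; 0.30 -> act.
```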

