Kindle Notes & Highlights
by Tim Harford
Read between June 25 and July 13, 2022
Doubt is also easy to sell because it is a part of the process of scientific exploration and debate.
I worry about a world in which many people will believe anything, but I worry far more about one in which people believe nothing beyond their own preconceptions.
Doubt is a powerful weapon, and statistics are a vulnerable target.
We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws. The more extreme the emotional reaction, the harder it is to think straight.
It is not easy to master our emotions while assessing information that matters to us, not least because our emotions can lead us astray in different directions.
We don’t need to become emotionless processors of numerical information—just noticing our emotions and taking them into account may often be enough to improve our judgment. Rather than requiring superhuman control over our emotions, we need simply to develop good habits.
Before I repeat any statistical claim, I first try to take note of how it makes me feel. It’s not a foolproof method against tricking myself, but it’s a habit that does little harm and is sometimes a great deal of help. Our emotions are powerful. We can’t make them vanish, nor should we want to. But we can, and should, try to notice when they are clouding our judgment.
Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion.
Wishful thinking isn’t the only form of motivated reasoning, but it is a common one. We believe in part because we want to.
The counterintuitive result is that presenting people with a detailed and balanced account of both sides of the argument may actually push them away from the center rather than pull them in. If we already have strong opinions, then we’ll seize upon welcome evidence, but we’ll find opposing data or arguments irritating. This biased assimilation of new evidence means that the more we know, the more partisan we’re able to be on a fraught issue.
Psychologists have a name for our tendency to confuse our own perspective with something more universal: it’s called “naive realism,” the sense that we are seeing reality as it truly is, without filters or errors.[9] Naive realism can lead us badly astray when we confuse our personal perspective on the world with some universal truth.
The Nobel laureate economist Friedrich Hayek had a phrase for the kind of awareness that is hard to capture in metrics and maps: the “knowledge of the particular circumstances of time and place.”
Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world rather than to understand it.
“When a measure becomes a target, it ceases to be a good measure” (a maxim widely known as Goodhart’s law).
The same warning came from Donald T. Campbell, who around the same time explained: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
a statistical metric may be a pretty decent proxy for something that really matters, but it is almost always a proxy rather than the real thing. Once you start using that proxy as a target to be improved, or a metric to control others at a distance, it will be distorted, faked, or undermined. The value of the measure will evaporate.
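A toy Python simulation (my illustration, not Harford’s; every number in it is invented) can make the mechanism concrete: a proxy that tracks true quality well stops tracking it once people can raise the proxy directly.

```python
# Toy illustration of Goodhart's law / Campbell's law (invented numbers):
# a proxy that tracks true quality stops tracking it once it becomes a
# target that people can game directly.
import random

random.seed(0)

def observed_metric(true_quality, gaming_effort):
    # The proxy reflects real quality plus noise, plus the payoff of
    # any effort spent inflating the measure itself.
    return true_quality + 2.0 * gaming_effort + random.gauss(0, 0.5)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

quality = [random.gauss(5, 1) for _ in range(1000)]

# Before the metric is a target, nobody games it.
before = [observed_metric(q, gaming_effort=0.0) for q in quality]

# Once it becomes a target, weaker performers game it hardest.
after = [observed_metric(q, gaming_effort=max(0.0, 6 - q)) for q in quality]

print("correlation with quality, before targeting:",
      round(correlation(quality, before), 2))
print("correlation with quality, after targeting: ",
      round(correlation(quality, after), 2))
```

With these invented numbers, the proxy tracks quality closely before targeting and becomes actively misleading afterward.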
I shouldn’t need to favor either fast or slow statistics; the deepest understanding comes from melding them together.
Muhammad Yunus, an economist, microfinance pioneer, and winner of the Nobel Peace Prize, has contrasted the “worm’s-eye view” of personal experience with the “bird’s-eye view” that statistics can provide.
There is no easy answer to the balance between the bird’s-eye view and the worm’s-eye view, between the broad and rigorous but dry insight we get from the numbers and the rich but parochial lessons we learn from experience. We must simply keep reminding ourselves what we’re learning and what we might be missing.
Much of what we think of as cultural differences turns out to be differences in income.
try to take both perspectives—the worm’s-eye view as well as the bird’s-eye view. They will usually show you something different, and they will sometimes pose a puzzle: How could both views be true? That should be the beginning of an investigation. Sometimes the statistics will be misleading, sometimes it will be our own eyes that deceive us, and sometimes the apparent contradiction can be resolved once we get a handle on what is happening.
Often, looking for an explanation really means looking for someone to blame.
it is important to understand what is being measured or counted, and how.
What the psychologist Steven Pinker calls the “curse of knowledge” is a constant obstacle to clear communication: once you know a subject fairly well, it is enormously difficult to put yourself in the position of someone who doesn’t know it.
Premature enumeration is not just an intellectual failure. Not asking what a statistic actually means is a failure of empathy, too.
The solution, then: Ask what is being counted and what stories lie behind the statistics.
A more plausible explanation is that we are drawn to surprising news, and surprising news is more often bad than good.
the psychologist Steven Pinker has argued that good news tends to unfold slowly, while bad news is often more sudden.
when media outlets want to grab our attention, they look for stories that are novel and unexpected over a short time horizon—and these stories are more likely to be bad than good.
For obvious reasons, this particular flavor of survivorship bias is called “publication bias.” Interesting findings are published; non-findings, or failures to replicate previous findings, face a higher publication hurdle.
So not only are journals predisposed to publish surprising results, but researchers facing “publish or perish” incentives are also more likely to submit surprising results that may not stand up to scrutiny.
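A short simulation (again mine, not the book’s; the study size and the conventional 1.96 threshold are the only assumptions) shows how that filter manufactures findings: simulate many small studies of an effect whose true size is zero, and publish only the significant ones.

```python
# Publication bias in miniature (invented setup): simulate many small
# studies of an effect whose true size is zero, then "publish" only the
# ones that clear the conventional significance bar.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # there is genuinely nothing to find
N_PER_STUDY = 30
N_STUDIES = 2000

published = []
for _ in range(N_STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    mean = statistics.fmean(sample)
    std_err = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    if abs(mean / std_err) > 1.96:   # "statistically significant"
        published.append(mean)

print(f"published {len(published)} of {N_STUDIES} studies")
print("true effect size:               ", TRUE_EFFECT)
print("average published |effect size|:",
      round(statistics.fmean(abs(m) for m in published), 2))
```

Only a few percent of the studies survive the filter, and every one of them reports an effect well away from zero, even though the true effect is nil.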
HARKing is shorthand for Hypothesizing After the Results are Known. To be clear, there’s nothing wrong with gathering data, poking around to find the patterns, and then constructing a hypothesis. That’s all part of science. But you then have to get new data to test the hypothesis.
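To see why the fresh-data step matters, here is a minimal sketch (my own, with invented parameters): dredge one dataset, in which every variable is pure noise, for its most striking pattern, then test that same “hypothesis” on new data.

```python
# Why HARKing misleads (invented parameters): dredge one dataset, where
# every variable is pure noise, for its most striking pattern, then test
# that same "hypothesis" on fresh data.
import random
import statistics

random.seed(2)

N_VARIABLES = 50
N_SUBJECTS = 40

def draw_dataset():
    # Fifty candidate measurements per study; none has a real effect.
    return [[random.gauss(0, 1) for _ in range(N_SUBJECTS)]
            for _ in range(N_VARIABLES)]

exploratory = draw_dataset()

# Step 1: poke around and pick whichever variable looks most impressive.
best = max(range(N_VARIABLES),
           key=lambda i: abs(statistics.fmean(exploratory[i])))
print("'best' variable, exploratory data:",
      round(statistics.fmean(exploratory[best]), 2))

# Step 2, the part HARKing skips: test that hypothesis on new data.
confirmatory = draw_dataset()
print("same variable, fresh data:        ",
      round(statistics.fmean(confirmatory[best]), 2))
```

The “winning” variable looks striking in the exploratory data and will typically collapse toward zero in the confirmatory data.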
The famous psychological results are famous not because they are the most rigorously demonstrated, but because they’re interesting. Fluke results are far more likely to be surprising, and so far more likely to hit that Goldilocks level of counterintuitiveness (not too absurd, but not too predictable) that makes them so fascinating. The “interestingness” filter is enormously powerful.
A global network of researchers calls itself the Cochrane Collaboration, and it maintains the Cochrane Library, an online database of systematic research reviews.
Thanks to the Cochrane summary we no longer have to guess if there’s a pile of important evidence that we simply weren’t told about.
A related network, the Campbell Collaboration, aims to do the same thing for social policy questions in areas such as education and criminal justice.
Does conformity vary in its power depending on who is under pressure to conform to whom?
All this suggests that one cure for conformity is to make decisions with a diverse group of people, people who are likely to bring different ideas and assumptions to the table.
Caroline Criado Perez, discussing her book Invisible Women.
Her book argues that all too often, the people responsible for the products and policies that shape our lives implicitly view the default customer—or citizen—as male.
any dataset begins with somebody deciding to collect the numbers. What numbers are and aren’t collected, what is and isn’t measured, and who is included or excluded are the result of all-too-human assumptions, preconceptions, and oversights.
The United Nations, for example, has embraced a series of ambitious “Sustainable Development Goals” for 2030. But development experts are starting to call attention to a problem: we often don’t have the data we would need to figure out whether those goals have been met.
But bigger isn’t always better. It’s perfectly possible to reach vast numbers of people while still missing out on enough other people to get a disastrously skewed impression of what’s really going on.
Sampling error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population.
Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all.
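The distinction is easy to demonstrate in a few lines of Python (my sketch; the population shares and reach rates are invented): sampling error shrinks as a random sample grows, while sampling bias stays put no matter how big the sample gets.

```python
# Sampling error vs. sampling bias (invented shares and reach rates):
# error shrinks as a random sample grows; bias does not shrink at all.
import random

random.seed(3)

# Population of 100,000 voters, roughly 55% supporting "yes". Suppose
# yes-voters are twice as likely to be reachable by our poll (a stand-in
# for, say, only phoning landline owners).
population = []
for _ in range(100_000):
    yes = random.random() < 0.55
    reachable = random.random() < (0.6 if yes else 0.3)
    population.append((yes, reachable))

reachable_pool = [p for p in population if p[1]]

def pct_yes(people):
    return 100 * sum(yes for yes, _ in people) / len(people)

print(f"true support: {pct_yes(population):.1f}%")

for n in (100, 1_000, 10_000):
    random_sample = random.sample(population, n)      # sampling error only
    biased_sample = random.sample(reachable_pool, n)  # sampling bias
    print(f"n={n:>6}: random {pct_yes(random_sample):.1f}%, "
          f"reachable-only {pct_yes(biased_sample):.1f}%")
```

The random sample wobbles around the true figure and settles down as the sample grows; the reachable-only sample sits well above it at every size.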
If algorithms are shown a skewed sample of the world, they will reach a skewed conclusion.
There are some overtly racist and sexist people out there—look around—but in general what we count and what we fail to count is often the result of an unexamined choice, of subtle biases and hidden assumptions that we haven’t realized are leading us astray.
we can and should remember to ask who or what might be missing from the data we’re being told about.
Researchers may not be explicit that an experiment studied only men—such information is sometimes buried in a statistical appendix, and sometimes not reported at all. But often a quick investigation will reveal that the study has a blind spot.