Kindle Notes & Highlights
by Tim Harford
Read between June 24 - June 24, 2021
The experimental subjects found it much easier to argue against positions they disliked than in favor of those they supported. There was a special power in doubt.
I worry about a world in which many people will believe anything, but I worry far more about one in which people believe nothing beyond their own preconceptions.
We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws.
We don’t need to become emotionless processors of numerical information—just noticing our emotions and taking them into account may often be enough to improve our judgment. Rather than requiring superhuman control over our emotions, we need simply to develop good habits. Ask yourself: How does this information make me feel? Do I feel vindicated or smug? Anxious, angry, or afraid? Am I in denial, scrambling to find a reason to dismiss the claim?
Psychologists call this “motivated reasoning.” Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion. In a football game, we see the fouls committed by the other team but overlook the sins of our own side. We are more likely to notice what we want to notice.
The counterintuitive result is that presenting people with a detailed and balanced account of both sides of the argument may actually push people away from the center rather than pull them in. If we already have strong opinions, then we’ll seize upon welcome evidence, but we’ll find opposing data or arguments irritating. This biased assimilation of new evidence means that the more we know, the more partisan we’re able to be on a fraught issue.
Better-informed people are actually more at risk of motivated reasoning on politically partisan topics: the more persuasively we can make the case for what our friends already believe, the more our friends will respect us.
When you see a statistical claim, pay attention to your own reaction. If you feel outrage, triumph, or denial, pause for a moment. Then reflect. You don’t need to be an emotionless robot, but you could and should think as well as feel.
Psychologists have a name for our tendency to confuse our own perspective with something more universal: it’s called “naive realism,” the sense that we are seeing reality as it truly is, without filters or errors.
Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world, rather than try to understand it. Economists tend to cite their colleague Charles Goodhart, who wrote in 1975: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”12 (Or, more pithily: “When a measure becomes a target, it ceases to be a good measure.”)
Muhammad Yunus, an economist, microfinance pioneer, and winner of the Nobel Peace Prize, has contrasted the “worm’s-eye view” of personal experience with the “bird’s-eye view” that statistics can provide.
Ask what is being counted, what stories lie behind the statistics.
Steven Pinker has argued that good news tends to unfold slowly, while bad news is often more sudden.
If we look only at the surviving planes—falling prey to “survivorship bias”—we’ll completely misunderstand where the real vulnerabilities are.
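Not from the book, but a minimal simulation of the point above: if hits to the engine or cockpit tend to bring a plane down, those hits will be scarce in the planes we actually get to examine. The locations, hit counts, and loss probabilities below are invented for illustration.

```python
import random

random.seed(0)

# Invented loss probability per hit, by location.
LOSS_PROB = {"wings": 0.05, "fuselage": 0.05, "engine": 0.60, "cockpit": 0.50}
LOCATIONS = list(LOSS_PROB)

def fly_mission():
    """One plane takes three random hits; return the hits and whether it made it back."""
    hits = [random.choice(LOCATIONS) for _ in range(3)]
    survived = all(random.random() > LOSS_PROB[loc] for loc in hits)
    return hits, survived

all_hits, survivor_hits = [], []
for _ in range(100_000):
    hits, survived = fly_mission()
    all_hits.extend(hits)
    if survived:
        survivor_hits.extend(hits)  # the only damage we would ever get to inspect

for loc in LOCATIONS:
    true_share = all_hits.count(loc) / len(all_hits)
    seen_share = survivor_hits.count(loc) / len(survivor_hits)
    print(f"{loc:8s} share of all hits {true_share:.2f} | share among survivors {seen_share:.2f}")
# Engine and cockpit hits look rare in the surviving planes precisely because they were fatal.
```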
Scientists sometimes call this practice “HARKing”—HARK is an acronym for Hypothesizing After Results Known. To be clear, there’s nothing wrong with gathering data, poking around to find the patterns, and then constructing a hypothesis. That’s all part of science. But you then have to get new data to test the hypothesis. Testing a hypothesis using the numbers that helped form the hypothesis in the first place is not OK.
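A sketch, not from the book, of why testing a hypothesis on the numbers that suggested it is misleading: on pure noise, the "best" pattern found by poking around looks real on the exploratory data and evaporates on fresh data. The sample sizes and number of candidate predictors here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pure noise: 200 subjects, 20 candidate predictors, an outcome with no real signal.
X = rng.normal(size=(200, 20))
y = rng.normal(size=200)

# Exploratory half: poke around and keep the predictor most correlated with the outcome.
X_explore, y_explore = X[:100], y[:100]
X_confirm, y_confirm = X[100:], y[100:]
best = max(range(20), key=lambda j: abs(np.corrcoef(X_explore[:, j], y_explore)[0, 1]))

# HARKing: "confirming" the hypothesis on the very data that suggested it.
r_same = np.corrcoef(X_explore[:, best], y_explore)[0, 1]
# The honest test: the same hypothesis, checked against data it has never seen.
r_new = np.corrcoef(X_confirm[:, best], y_confirm)[0, 1]

print(f"predictor {best}: r = {r_same:+.2f} on the data that generated the hypothesis")
print(f"predictor {best}: r = {r_new:+.2f} on new data (usually close to zero)")
```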
It is easy, in Nassim Taleb’s memorable phrase, to be “fooled by randomness.”
For researchers, it’s clear what that improvement would look like: They need to come clean about the Kickended side of research. They need to be transparent about the data that were gathered but not published, the statistical tests that were performed but then set to one side, the clinical trials that went missing in action, and the studies that produced humdrum results and were rejected by journals or stuffed in a file drawer while researchers got on with something more fruitful.
Or you can find science journalism that explains the facts, puts them in a proper context, and when necessary speaks truth to power. If you care enough as a reader you can probably figure out the difference. It’s really not hard. Ask yourself if the journalist reporting on the research has clearly explained what’s being measured. Was this a study done with humans? Or mice? Or in a petri dish? A good reporter will be clear. Then: How large is the effect? Was this a surprise to other researchers?
The power to not collect data is one of the most important and little-understood sources of power that governments have . . . By refusing to amass knowledge in the first place, decision-makers exert power over the rest of us. • Anna Powell-Smith, MissingNumbers.org
Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all.
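A short sketch of that distinction, using an invented population of incomes: sampling error shrinks as a random sample grows, while sampling bias (here, a survey that can only reach the richest fifth) stays put no matter how much data you collect.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented population: one million skewed incomes with a known true mean.
population = rng.lognormal(mean=10.5, sigma=0.6, size=1_000_000)
true_mean = population.mean()

# Sampling error: random samples miss the true mean only by chance, and less so as n grows.
for n in (100, 10_000):
    sample = rng.choice(population, size=n, replace=False)
    print(f"random sample, n={n:>6}: mean off by {sample.mean() - true_mean:+,.0f}")

# Sampling bias: the sample was never random (only the top 20% of incomes respond).
reachable = np.sort(population)[-200_000:]
biased = rng.choice(reachable, size=10_000, replace=False)
print(f"biased sample, n=10,000: mean off by {biased.mean() - true_mean:+,.0f} (more data won't fix it)")
```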
Onora O’Neill argues that if we want to demonstrate trustworthiness, we need the basis of our decisions to be “intelligently open.” She proposes a checklist of four properties that intelligently open decisions should have. Information should be accessible: that implies it’s not hiding deep in some secret data vault. Decisions should be understandable—capable of being explained clearly and in plain language. Information should be usable—which may mean something as simple as making data available in a standard digital format. And decisions should be assessable—meaning that anyone with the time …
Just so: pictures engage the imagination and the emotion, and are easily shared before we have time to think a little harder. If we don’t, we’re allowing ourselves to be dazzled.
Ideally, a decision maker or a forecaster will combine the outside view and the inside view—or, similarly, statistics plus personal experience. But it’s much better to start with the statistical view, the outside view, and then modify it in the light of personal experience than it is to go the other way around. If you start with the inside view you have no real frame of reference, no sense of scale—and can easily come up with a probability that is ten times too large, or ten times too small. Second, keeping score was important. As Tetlock’s intellectual predecessors Fischhoff and Beyth had …
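Not Tetlock's or Harford's code, just a toy illustration of the two habits in this truncated passage: anchor a forecast on the outside view before adjusting it with inside knowledge, and then keep score. The reference-class numbers and blending weights are invented; the Brier score (mean squared error between stated probabilities and what actually happened) is the standard scoring rule used in Tetlock's forecasting tournaments.

```python
import numpy as np

# Outside view: durations, in weeks, of a reference class of similar past projects (invented).
reference_class = np.array([18, 22, 25, 30, 31, 35, 40, 44, 52, 60])
outside_view = np.median(reference_class)                # start from the base rate

# Inside view: our gut says 20 weeks. Anchor on the outside view, then adjust a little.
inside_view = 20
blended = 0.7 * outside_view + 0.3 * inside_view         # weights are arbitrary, for illustration

print(f"outside view {outside_view:.0f} wks, inside view {inside_view} wks, blended {blended:.0f} wks")

# Keeping score: compare stated probabilities with 0/1 outcomes. Lower Brier scores are better.
forecast_probs = np.array([0.9, 0.6, 0.2, 0.8])          # e.g. "will each project finish on time?"
outcomes = np.array([1, 0, 0, 1])
brier = np.mean((forecast_probs - outcomes) ** 2)
print(f"Brier score: {brier:.3f}")
```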
“For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded,” wrote Philip Tetlock after the study had been completed. “It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.”14 And if even that is too long for the bumper sticker, what about this: superforecasting means being willing to change your mind.
“Making public commitments ‘freezes’ attitudes in place. So saying something dumb makes you a bit dumber. It becomes harder to correct yourself.”
“When my information changes, I alter my conclusions. What do you do, sir?”
Be Curious
I can think of nothing an audience won’t understand. The only problem is to interest them; once they are interested they understand anything in the world. • Orson Welles1
First, we should learn to stop and notice our emotional reaction to a claim, rather than accepting or rejecting it because of how it makes us feel. Second, we should look for ways to combine the “bird’s eye” statistical perspective with the “worm’s eye” view from personal experience. Third, we should look at the labels on the data we’re being given, and ask if we understand what’s really being described. Fourth, we should look for comparisons and context, putting any claim into perspective. Fifth, we should look behind the statistics at where they came from—and what other data might have …
The philosopher Onora O’Neill once declared, “Well-placed trust grows out of active inquiry rather than blind acceptance.”
On the most politically polluted, tribal questions, where intelligence and education fail, this trait does not. And if you’re desperately, burningly curious to know what it is—congratulations. You may be inoculated already. Curiosity breaks the relentless pattern. Specifically, Kahan identified “scientific curiosity.” That’s different from scientific literacy. The two qualities are correlated, of course, but there are curious people who know rather little about science (yet), and highly trained people with little appetite to learn more.
The more curious we are, the less our tribalism seems to matter. (There is little correlation between scientific curiosity and political affiliation. Happily, there are plenty of curious people across the political spectrum.)
Neuroscientific studies suggest that the brain responds in much the same anxious way to facts that threaten our preconceptions as it does to wild animals that threaten our lives.6 Yet for someone in a curious frame of mind, in contrast, a surprising claim need not provoke anxiety. It can be an engaging mystery, or a puzzle to solve.
There’s a sweet spot for curiosity: if we know nothing, we ask no questions; if we know everything, we ask no questions either. Curiosity is fueled once we know enough to know that we do not know.
It’s a rather beautiful discovery: in a world where so many people seem to hold extreme views with strident certainty, you can deflate somebody’s overconfidence and moderate their politics simply by asking them to explain the details.