The Data Detective: Ten Easy Rules to Make Sense of Statistics
The experimental subjects found it much easier to argue against positions they disliked than in favor of those they supported. There was a special power in doubt.
I worry about a world in which many people will believe anything, but I worry far more about one in which people believe nothing beyond their own preconceptions.
We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws.
We don’t need to become emotionless processors of numerical information—just noticing our emotions and taking them into account may often be enough to improve our judgment. Rather than requiring superhuman control over our emotions, we need simply to develop good habits. Ask yourself: How does this information make me feel? Do I feel vindicated or smug? Anxious, angry, or afraid? Am I in denial, scrambling to find a reason to dismiss the claim?
Psychologists call this “motivated reasoning.” Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion. In a football game, we see the fouls committed by the other team but overlook the sins of our own side. We are more likely to notice what we want to notice.
The counterintuitive result is that presenting people with a detailed and balanced account of both sides of the argument may actually push people away from the center rather than pull them in. If we already have strong opinions, then we’ll seize upon welcome evidence, but we’ll find opposing data or arguments irritating. This biased assimilation of new evidence means that the more we know, the more partisan we’re able to be on a fraught issue.
better-informed people are actually more at risk of motivated reasoning on politically partisan topics: the more persuasively we can make the case for what our friends already believe, the more our friends will respect us.
When you see a statistical claim, pay attention to your own reaction. If you feel outrage, triumph, denial, pause for a moment. Then reflect. You don’t need to be an emotionless robot, but you could and should think as well as feel.
Psychologists have a name for our tendency to confuse our own perspective with something more universal: it’s called “naive realism,” the sense that we are seeing reality as it truly is, without filters or errors.
Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world, rather than try to understand it. Economists tend to cite their colleague Charles Goodhart, who wrote in 1975: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”12 (Or, more pithily: “When a measure becomes a target, it ceases to be a good measure.”)
Muhammad Yunus, an economist, microfinance pioneer, and winner of the Nobel Peace Prize, has contrasted the “worm’s-eye view” of personal experience with the “bird’s-eye view” that statistics can provide.
Ask what is being counted, what stories lie behind the statistics.
Steven Pinker has argued that good news tends to unfold slowly, while bad news is often more sudden.
If we look only at the surviving planes—falling prey to “survivorship bias”—we’ll completely misunderstand where the real vulnerabilities are.
Scientists sometimes call this practice “HARKing”—HARK is an acronym for Hypothesizing After Results Known. To be clear, there’s nothing wrong with gathering data, poking around to find the patterns, and then constructing a hypothesis. That’s all part of science. But you then have to get new data to test the hypothesis. Testing a hypothesis using the numbers that helped form the hypothesis in the first place is not OK.
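The point can be seen in a minimal Python sketch using entirely hypothetical data, where everything is pure noise: if you scan many variables for the strongest pattern, you will find one, and it will look convincing right up until you test it on fresh measurements.

```python
import random
import statistics

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n, k = 30, 40  # 30 observations, 40 pure-noise "predictors"
outcome = [random.gauss(0, 1) for _ in range(n)]
predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

# Step 1 (legitimate): poke around and find the strongest pattern.
best = max(range(k), key=lambda j: abs(correlation(predictors[j], outcome)))
in_sample = correlation(predictors[best], outcome)  # tends to look impressive

# Step 2 (the HARKing trap): stopping here would test the hypothesis on the
# very numbers that suggested it. The honest move is to collect fresh
# measurements of the chosen variable and the outcome, then test again.
fresh_outcome = [random.gauss(0, 1) for _ in range(n)]
fresh_predictor = [random.gauss(0, 1) for _ in range(n)]
out_of_sample = correlation(fresh_predictor, fresh_outcome)  # typically shrinks toward zero
```

Because `in_sample` is the best of forty tries at explaining noise, it is inflated by construction; `out_of_sample` has no such advantage, which is exactly why the fresh-data test is the one that counts.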
It is easy, in Nassim Taleb’s memorable phrase, to be “fooled by randomness.”
For researchers, it’s clear what that improvement would look like: They need to come clean about the Kickended side of research. They need to be transparent about the data that were gathered but not published, the statistical tests that were performed but then set to one side, the clinical trials that went missing in action, and the studies that produced humdrum results and were rejected by journals or stuffed in a file drawer while researchers got on with something more fruitful.
Or you can find science journalism that explains the facts, puts them in a proper context, and when necessary speaks truth to power. If you care enough as a reader you can probably figure out the difference. It’s really not hard. Ask yourself if the journalist reporting on the research has clearly explained what’s being measured. Was this a study done with humans? Or mice? Or in a petri dish? A good reporter will be clear. Then: How large is the effect? Was this a surprise to other researchers?
The power to not collect data is one of the most important and little-understood sources of power that governments have . . . By refusing to amass knowledge in the first place, decision-makers exert power over the rest of us. • Anna Powell-Smith, MissingNumbers.org
Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all.
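A small Python simulation (with an invented population and an invented "reachable by phone" subgroup) illustrates the distinction: a random sample misses the true figure only by chance, while a sample drawn from a skewed frame misses it systematically, no matter how large the sample grows.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: 10,000 voters, 30% of whom support a policy.
# Half are "reachable by phone", and supporters are overrepresented there:
# 2,500 of 5,000 reachable support it, versus 500 of 5,000 unreachable.
reachable = [1] * 2500 + [0] * 2500      # 50% support inside the sampling frame
unreachable = [1] * 500 + [0] * 4500     # 10% support outside it
population = reachable + unreachable

true_rate = statistics.mean(population)  # exactly 0.30 by construction

# Sampling error: a properly random sample scatters around the true rate,
# and the scatter shrinks as the sample grows.
random_sample = random.sample(population, 500)
random_estimate = statistics.mean(random_sample)

# Sampling bias: polling only the reachable group centers the estimate on
# the frame's 50% rate, not the population's 30% -- more data won't fix it.
biased_sample = random.sample(reachable, 500)
biased_estimate = statistics.mean(biased_sample)
```

The design choice here is deliberate: the bias lives in *which* people can be sampled, so it survives any increase in sample size, whereas the random sample's error is just luck of the draw.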
Onora O’Neill argues that if we want to demonstrate trustworthiness, we need the basis of our decisions to be “intelligently open.” She proposes a checklist of four properties that intelligently open decisions should have. Information should be accessible: that implies it’s not hiding deep in some secret data vault. Decisions should be understandable—capable of being explained clearly and in plain language. Information should be usable—which may mean something as simple as making data available in a standard digital format. And decisions should be assessable—meaning that anyone with the time …
Just so: pictures engage the imagination and the emotion, and are easily shared before we have time to think a little harder. If we don’t, we’re allowing ourselves to be dazzled.
Ideally, a decision maker or a forecaster will combine the outside view and the inside view—or, similarly, statistics plus personal experience. But it’s much better to start with the statistical view, the outside view, and then modify it in the light of personal experience than it is to go the other way around. If you start with the inside view you have no real frame of reference, no sense of scale—and can easily come up with a probability that is ten times too large, or ten times too small. Second, keeping score was important. As Tetlock’s intellectual predecessors Fischhoff and Beyth had …
“For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded,” wrote Philip Tetlock after the study had been completed. “It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.”14 And if even that is too long for the bumper sticker, what about this: superforecasting means being willing to change your mind.
“Making public commitments ‘freezes’ attitudes in place. So saying something dumb makes you a bit dumber. It becomes harder to correct yourself.”
“When my information changes, I alter my conclusions. What do you do, sir?”
Be Curious
I can think of nothing an audience won’t understand. The only problem is to interest them; once they are interested they understand anything in the world. • Orson Welles1
First, we should learn to stop and notice our emotional reaction to a claim, rather than accepting or rejecting it because of how it makes us feel. Second, we should look for ways to combine the “bird’s eye” statistical perspective with the “worm’s eye” view from personal experience. Third, we should look at the labels on the data we’re being given, and ask if we understand what’s really being described. Fourth, we should look for comparisons and context, putting any claim into perspective. Fifth, we should look behind the statistics at where they came from—and what other data might have …
The philosopher Onora O’Neill once declared, “Well-placed trust grows out of active inquiry rather than blind acceptance.”
On the most politically polluted, tribal questions, where intelligence and education fail, this trait does not. And if you’re desperately, burningly curious to know what it is—congratulations. You may be inoculated already. Curiosity breaks the relentless pattern. Specifically, Kahan identified “scientific curiosity.” That’s different from scientific literacy. The two qualities are correlated, of course, but there are curious people who know rather little about science (yet), and highly trained people with little appetite to learn more.
the more curious we are, the less our tribalism seems to matter. (There is little correlation between scientific curiosity and political affiliation. Happily, there are plenty of curious people across the political spectrum.)
Neuroscientific studies suggest that the brain responds in much the same anxious way to facts that threaten our preconceptions as it does to wild animals that threaten our lives.6 Yet for someone in a curious frame of mind, in contrast, a surprising claim need not provoke anxiety. It can be an engaging mystery, or a puzzle to solve.
There’s a sweet spot for curiosity: if we know nothing, we ask no questions; if we know everything, we ask no questions either. Curiosity is fueled once we know enough to know that we do not know.
It’s a rather beautiful discovery: in a world where so many people seem to hold extreme views with strident certainty, you can deflate somebody’s overconfidence and moderate their politics simply by asking them to explain the details.