Kindle Notes & Highlights
by Tim Harford
Read between August 19 and September 11, 2022
one of the central claims of my book, which is that the statistics matter.
Numbers can be used to win political arguments or to sell toothpaste, but their most important function is to help us understand the world around us. If we scorn them, or take them for granted, we are flying blind. People die as a result.
our emotions, our preconceptions, and our political affiliations are capable of badly warping the way we interpret the evidence.
Second, political decisions shape what statistics we gather and share—and what gets ignored or concealed.
the third point, which I had been wanting to make all along: Statistics show us things we cannot see in any other way.
Of course, we shouldn’t be credulous—yet the antidote to credulity isn’t to believe nothing, but to have the confidence to assess information with curiosity and a healthy skepticism.
I want to convince you that statistics can be used to illuminate reality with clarity and honesty.
sometimes the problem is not that we are too eager to believe something, but that we find reasons not to believe anything.
Much as many smokers would like to keep smoking, many of us are fondly attached to our gut instincts on political questions. All politicians need to do is persuade us to doubt evidence that would challenge those instincts.
worry about a world in which many people will believe anything, but I worry far more about one in which people believe nothing beyond their own preconceptions.
Yes, it’s easy to lie with statistics—but it’s even easier to lie without them.
Whatever we’re trying to understand about the world, one another, and ourselves, we won’t get far without statistics—any more than we can hope to examine bones without an X-ray, bacteria without a microscope, or the heavens without a telescope.
when it comes to interpreting the world around us, we need to realize that our feelings can trump our expertise.
We don’t need to become emotionless processors of numerical information—just noticing our emotions and taking them into account may often be enough to improve our judgment. Rather than requiring superhuman control over our emotions, we need simply to develop good habits. Ask yourself: How does this information make me feel? Do I feel vindicated or smug? Anxious, angry, or afraid? Am I in denial, scrambling to find a reason to dismiss the claim?
Van Meegeren set a trap into which only a true expert could stumble.
people with deeper expertise are better equipped to spot deception, but if they fall into the trap of motivated reasoning, they are able to muster more reasons to believe whatever they really wish to believe.
presenting people with a detailed and balanced account of both sides of the argument may actually push them away from the center rather than pull them in. If we already have strong opinions, then we’ll seize upon welcome evidence, but we’ll find opposing data or arguments irritating. This biased assimilation of new evidence means that the more we know, the more partisan we’re able to be on a fraught issue.
The more we get into the habit of counting to three and noticing our knee-jerk reactions, the closer to the truth we are likely to get.
Our personal experiences should not be dismissed along with our feelings, at least not without further thought. Sometimes the statistics give us a vastly better way to understand the world; sometimes they mislead us. We need to be wise enough to figure out when the statistics are in conflict with everyday experience—and in those cases, which to believe.
Sometimes personal experience tells us one thing, the statistics tell us something quite different, and both are true.
The world is full of patterns that are too subtle or too rare to detect by eyeballing them, and a pattern doesn’t need to be very subtle or rare to be hard to spot without a statistical lens.
Psychologists have a name for our tendency to confuse our own perspective with something more universal: it’s called “naive realism,” the sense that we are seeing reality as it truly is, without filters or errors.
“When a measure becomes a target, it ceases to be a good measure.”
a statistical metric may be a pretty decent proxy for something that really matters, but it is almost always a proxy rather than the real thing. Once you start using that proxy as a target to be improved, or a metric to control others at a distance, it will be distorted, faked, or undermined. The value of the measure will evaporate.
Often, looking for an explanation really means looking for someone to blame.
Not asking what a statistic actually means is a failure of empathy,
when we stop thinking and start feeling, ludicrous errors show up very promptly.
being clear about what’s being measured, and how.
premature enumeration—rushing to work with the numbers before we really understand what those numbers are supposed to mean—it’s
Andrew Elliott—an entrepreneur who likes the question so much he published a book with the title Is That a Big Number?—suggests that we should all carry a few “landmark numbers” in our heads to allow easy comparison.
The population of the United States is 325 million. The population of the United Kingdom is 65 million. The population of the world is 7.5 billion.
Name any particular age (under the age of sixty): there are about 800,000 people of that age in the UK. If a policy involves all three-year-olds, for example, there are 800,000 of them. In the United States, there are about 4 million people of any particular age (under the age of sixty).
Distance around the Earth: 40,000 kilometers, or 25,000 miles. It varies depending on whether you go around the poles or around the equator, but not much.
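The landmark-number habit is just division: spread a headline total over a landmark population and see if the result still sounds big. A minimal sketch, using the approximate figures quoted above (all names and the £1 billion example are my own, not from the book):

```python
# Landmark numbers from the highlight above (approximate figures).
US_POPULATION = 325_000_000
UK_POPULATION = 65_000_000
WORLD_POPULATION = 7_500_000_000
UK_PER_AGE_YEAR = 800_000     # people of any given age under 60 in the UK
US_PER_AGE_YEAR = 4_000_000   # the same figure for the US

def per_person(total, population):
    """Divide a headline total by a landmark population."""
    return total / population

# Hypothetical headline: "£1 billion will be spent on X." Big number?
# Spread over the UK population it is about £15 per person.
print(round(per_person(1_000_000_000, UK_POPULATION), 2))
```

Whether £15 per person is a lot depends on what X is, but the comparison turns an impressive-sounding figure into one a reader can actually judge.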
The splash of a daily newspaper, the lead story on a TV bulletin, and the top item on a website will all focus on the most dramatic, engaging, and significant events, since the typical news consumer will last have checked in a few hours previously.
Daily news always seems more informative than rolling news; weekly news is typically more informative than daily news. A book is often better still.
So however much news you choose to read, make sure you spend time looking for longer-term, slower-paced information. You will notice things—good and bad—that others ignore.
notice your feelings about the claim;
constructively sense-checking the claim against your personal experience;
asking yourself if you really understand what ...
Step back and look for information that can put the claim into context. Try to g...
If successes are celebrated while failures languish out of sight (which is often the situation), then we see a very strange slice of the whole picture.
only evidence of precognition was publishable because only evidence of precognition was surprising. Studies showing no evidence of precognition are like bombers that have been shot in the engine: no matter how often such things happen, they’re not going to make it to where we can see them.
this particular flavor of survivorship bias is called “publication bias.” Interesting findings are published; non-findings, or failures to replicate previous findings, face a higher publication hurdle.
not only are journals predisposed to publish surprising results, researchers facing “publish or perish” incentives are more likely to submit surprising results that may not stand up to scrutiny.
flukes are likely to be disproportionately published.
Testing a hypothesis using the numbers that helped form the hypothesis in the first place is not OK.
The standard statistical methods are designed to exclude most chance results.[19] But a combination of publication bias and loose research practices means we can expect that mixed in with the real discoveries will be a large number of statistical accidents.
The problems are obvious. Five percent is an arbitrary cutoff point—why not 6 percent, or 4 percent?—and it encourages us to think in black-and-white, pass-or-fail terms, instead of embracing degrees of uncertainty.
Conceptually, statistical significance is baffling, almost backward: it tells us the chance of observing the data given a particular theory, the theory that there is no effect. Really, we’d like to know the opposite, the probability of a particular theory being true, given the data.
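The "almost backward" point can be made concrete with a small sketch (my own illustration, with a hypothetical coin example): a p-value answers "how likely is data at least this extreme, assuming no effect?", not "how likely is it that there is no effect?".

```python
from math import comb

def binomial_p_value(heads, flips):
    """Two-sided p-value under the 'fair coin' null hypothesis: the chance
    of a result at least this far from flips/2 if the coin really is fair."""
    k = abs(heads - flips / 2)
    extreme = [h for h in range(flips + 1) if abs(h - flips / 2) >= k]
    return sum(comb(flips, h) for h in extreme) / 2 ** flips

# 60 heads in 100 flips. The p-value (~0.057) is the probability of a split
# this lopsided GIVEN a fair coin. It is NOT the probability that the coin
# is fair -- that would need, e.g., a prior over how often coins are biased.
p = binomial_p_value(60, 100)
print(round(p, 3))
```

Note that this p lands just above the 5 percent line, which illustrates the other complaint in the highlight: nothing about the evidence changes between 60 and 61 heads, yet the pass-or-fail convention would call one "not significant" and the other "significant."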
Ask yourself if the journalist reporting on the research has clearly explained what’s being measured. Was this a study done with humans? Or mice? Or in a petri dish? A good reporter will be clear. Then: How large is the effect? Was this a surprise to other researchers? A good journalist will try to make space to explain—and the article will be much more fun to read as a result, satisfying your curiosity and helping you to understand.
algorithms have become tools for finding patterns in large sets of data.