Kindle Notes & Highlights
by Tim Harford
Read between October 29, 2022 - January 26, 2023
Of course, we shouldn’t be credulous—yet the antidote to credulity isn’t to believe nothing, but to have the confidence to assess information with curiosity and a healthy skepticism.
I worry about a world in which many people will believe anything, but I worry far more about one in which people believe nothing beyond their own preconceptions.
We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws.
Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion.
Benjamin Franklin commented, “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.”
giving people more information seems actively to polarize them on the question of climate change.
When we encounter evidence that we dislike, we ask ourselves, “Must I believe this?” More detail will often give us more opportunity to find holes in the argument. And when we encounter evidence that we approve of, we ask a different question: “Can I believe this?” More detail means more toeholds on to which that belief can cling.
The counterintuitive result is that presenting people with a detailed and balanced account of both sides of the argument may actually push people away from the center rather than pull them in. If we already have strong opinions, then we’ll seize upon welcome evidence, but we’ll find opposing data or arguments irritating. This biased assimilation of new evidence means that the more we know, the more partisan we’re able to be on a fraught issue.
When we encounter a statistical claim about the world and are thinking of sharing it on social media or typing a furious rebuttal, we should instead ask ourselves, “How does this make me feel?”[*] We should do this not just for our own sake, but as a social duty. We’ve seen how powerful social pressure can be in influencing what we believe and how we think. When we slow down, control our emotions and our desire to signal partisan affiliation, and commit ourselves to calmly weighing the facts, we’re not just thinking more clearly—we are also modeling clear thinking for others.
Psychologists have a name for our tendency to confuse our own perspective with something more universal: it’s called “naive realism,” the sense that we are seeing reality as it truly is, without filters or errors.
These news reports are data, in a way. They’re just not representative data. But they certainly influence our views of the world.
Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world, rather than try to understand it.
“Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
“When a measure becomes a target, it ceases to be a good measure.”
In statistics, as elsewhere, hard logic and personal impressions work best when they reinforce and correct each other.
Forty-two million people have more than a million dollars each, collectively owning about $142 trillion. A few of them are billionaires, but most are not. If you have a nice house with no mortgage in a place such as London, New York, or Tokyo, that might easily be enough to put you in this group. So would the right to a good private pension.[*] [19] Nearly 1 percent of the world’s adult population are in this group.
Four hundred thirty-six million people, with more than $100,000 but less than a million, collectively own another $125 trillion. Nearly 10 percent of the world’s adult population are in this second group.
Another billion people have more than $10,000 but less than $100,000; they own about $45 trillion among them.
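Those three quoted figures invite a quick sanity check. Here is a back-of-envelope sketch in Python; the roughly 5 billion world adult population is my own inference from the “nearly 1 percent” and “nearly 10 percent” shares, not a figure from the highlights:

```python
# Back-of-envelope arithmetic on the quoted wealth figures.
# The ~5 billion adult total is an assumption inferred from the quoted shares.
groups = {
    "over $1M":      (42e6,  142e12),
    "$100k to $1M":  (436e6, 125e12),
    "$10k to $100k": (1e9,   45e12),
}
adults = 5e9  # assumed world adult population

for name, (people, wealth) in groups.items():
    print(f"{name:>14}: avg ${wealth / people:,.0f} each, "
          f"{people / adults:.1%} of adults")
# over $1M:       avg $3,380,952 each,  0.8% of adults
# $100k to $1M:   avg $286,697 each,    8.7% of adults
# $10k to $100k:  avg $45,000 each,    20.0% of adults
```

The implied averages and population shares square with the quoted “nearly 1 percent” and “nearly 10 percent” figures.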
Often we hear someone make a vague assertion like “inequality has risen” and we can’t even guess that much: inequality of what, between whom, and measured how?
The good stories are everywhere. They are not made memorable by their rarity; they are made forgettable by their ubiquity. Good things happen so often that they cannot seriously be considered for inclusion in a newspaper. “An Estimated 154,000 People Escaped from Poverty Yesterday!” True, but not news.
Nassim Nicholas Taleb, author of The Black Swan, puts it succinctly: “To be completely cured of newspapers, spend a year reading the previous week’s newspapers.”[27]
Daily news always seems more informative than rolling news; weekly news is typically more informative than daily news. A book is often better still. Even within a daily or a weekly newspaper, I find myself preferring the slower-paced explanation and analysis rather than the breaking news.
So however much news you choose to read, make sure you spend time looking for longer-term, slower-paced information. You will notice things—good and bad—that others ignore.
“In each human coupling, a thousand million sperm vie for a single egg. Multiply those odds by countless generations . . . it was you, only you, that emerged. To distill so specific a form from that chaos of improbability, like turning air to gold . . . that is the crowning unlikelihood . . .” “You could say that about anybody in the world!” “Yes. Anybody in the world . . . But the world is so full of people, so crowded with these miracles, that they become commonplace and we forget . . .” • Alan Moore, Watchmen
Normally, when we talk of bias we think of a conscious ideological slant. But many biases emerge from the way the world presents some stories to us while filtering out others.
Such bias is everywhere. Most of the books people read are bestsellers—but most books are not bestsellers, and most book projects never become books at all.
There’s nothing wrong with doing a large study. In general, more data is better. But if data are gathered bit by bit, testing as we go, then the standard statistical tests aren’t valid. Those tests assume that the data have simply been gathered, then tested—not that scientists have collected some data, tested them, and then maybe collected a bit more.
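A small simulation makes the point concrete. This is my own sketch, not an example from the book: even when there is no real effect, a researcher who checks the p-value after every batch of observations and stops at the first “significant” result will cross the 5 percent threshold far more often than 5 percent of the time.

```python
# A minimal sketch of optional stopping (assumed setup, not from the book):
# peek at the p-value every 20 observations, stop at the first p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
peek_points = range(20, 201, 20)  # check the p-value every 20 observations

false_positives = 0
for _ in range(n_experiments):
    data = rng.normal(loc=0.0, scale=1.0, size=200)  # true effect is zero
    for n in peek_points:
        _, p = stats.ttest_1samp(data[:n], popmean=0.0)
        if p < 0.05:          # stop early and declare a "finding"
            false_positives += 1
            break

print(f"False-positive rate with peeking: {false_positives / n_experiments:.1%}")
# A single fixed-sample test would be ~5%; peeking pushes it well above that.
```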
Scientists sometimes call this practice “HARKing”—HARK is an acronym for Hypothesizing After Results Known. To be clear, there’s nothing wrong with gathering data, poking around to find the patterns, and then constructing a hypothesis. That’s all part of science. But you then have to get new data to test the hypothesis. Testing a hypothesis using the numbers that helped form the hypothesis in the first place is not OK.
The famous psychological results are famous not because they are the most rigorously demonstrated, but because they’re interesting. Fluke results are far more likely to be surprising, and so far more likely to hit that Goldilocks level of counterintuitiveness (not too absurd, but not too predictable) that makes them so fascinating. The “interestingness” filter is enormously powerful.
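A toy simulation (again my own illustration, not the book's) shows the filter at work: when only surprising estimates get noticed, the noticed ones systematically overstate the truth.

```python
# Toy illustration: 10,000 noisy studies of a small true effect,
# where only the "surprising" estimates survive the interestingness filter.
import numpy as np

rng = np.random.default_rng(2)
true_effect = 0.1
estimates = true_effect + rng.normal(0.0, 0.3, size=10_000)  # noisy studies
noticed = estimates[np.abs(estimates) > 0.6]  # only striking results survive

print(f"true effect:          {true_effect}")
print(f"mean of all studies:  {estimates.mean():.3f}")   # close to 0.1
print(f"mean of noticed ones: {noticed.mean():.3f}")     # heavily inflated
```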
It is all too tempting to assume that what we do not measure simply does not exist.
Sampling error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population. The “margin of error” reported in opinion polls reflects this risk, and the larger the sample, the smaller the margin of error.
sampling error has a far more dangerous friend: sampling bias. Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all.
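To see the difference, here is a minimal sketch, entirely my own construction: the margin of error shrinks as the sample grows, but a biased sampling design gives the wrong answer no matter how big the sample gets. The 52 percent figure and the twice-as-likely-to-respond mechanism are invented for illustration.

```python
# Sampling error shrinks with n; sampling bias does not shrink at all.
import numpy as np

rng = np.random.default_rng(1)
population = rng.binomial(1, 0.52, size=1_000_000)  # 52% truly hold opinion A

# Biased design: holders of opinion A are twice as likely to respond.
weights = np.where(population == 1, 2.0, 1.0)
weights = weights / weights.sum()

for n in (100, 1_000, 10_000):
    random_sample = rng.choice(population, size=n, replace=False)
    biased_sample = rng.choice(population, size=n, replace=False, p=weights)
    moe = 1.96 * np.sqrt(0.52 * 0.48 / n)  # the reported "margin of error"
    print(f"n={n:>6}  random={random_sample.mean():.3f} (±{moe:.3f})  "
          f"biased={biased_sample.mean():.3f}")

# The random estimate lands within a shrinking margin of error around 0.52;
# the biased estimate settles near 0.68 however large the sample becomes.
```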
One thing is certain. If algorithms are shown a skewed sample of the world, they will reach a skewed conclusion.
computers trained on our own historical biases will repeat those biases at the very moment we’re trying to leave them behind us.
Keeping the algorithms and the datasets under wraps is the mind-set of the alchemist. Sharing them openly so they can be analyzed, debated, and—hopefully—improved on? That’s the mind-set of the scientist.
we don’t and shouldn’t trust in general: we trust specific people or institutions to do specific things.
Just like people, algorithms are neither trustworthy nor untrustworthy as a general class. Just as with people, rather than asking, “Should we trust algorithms?” we should ask, “Which algorithms can we trust, and what can we trust them to do?”
“statistics in themselves don’t deliver benefits. It’s the use of statistics that delivers benefits through better, quicker decisions by governments, companies, charities and individuals.”
This book started with a warning that we should notice our emotional response to the factual claims around us. Just so: pictures engage the imagination and the emotion, and are easily shared before we have time to think a little harder. If we don’t, we’re allowing ourselves to be dazzled.
“A good chart isn’t an illustration but a visual argument,” declares Alberto Cairo near the beginning of his book How Charts Lie.
When you look at data visualizations, you’ll do much better if you recognize that someone may well be trying to persuade you of something.
“For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded,” wrote Philip Tetlock.
ten statistical commandments in this book. First, we should learn to stop and notice our emotional reaction to a claim, rather than accepting or rejecting it because of how it makes us feel. Second, we should look for ways to combine the “bird’s eye” statistical perspective with the “worm’s eye” view from personal experience. Third, we should look at the labels on the data we’re being given, and ask if we understand what’s really being described. Fourth, we should look for comparisons and context, putting any claim into perspective. Fifth, we should look behind the statistics at where they …
The philosopher Onora O’Neill once declared, “Well-placed trust grows out of active inquiry rather than blind acceptance.”
“scientific curiosity.” That’s different from scientific literacy. The two qualities are correlated, of course, but there are curious people who know rather little about science (yet), and highly trained people with little appetite to learn more.
the more curious we are, the less our tribalism seems to matter.
one of our stubborn defenses against changing our minds is that we’re good at filtering out or dismissing unwelcome information. A curious person, however, enjoys being surprised and hungers for the unexpected. He or she will not be filtering out surprising news, because it’s far too intriguing.
Neuroscientific studies suggest that the brain responds in much the same anxious way to facts that threaten our preconceptions as it does to wild animals that threaten our lives.[6] Yet for someone in a curious frame of mind, in contrast, a surprising claim need not provoke anxiety. It can be an engaging mystery, or a puzzle to solve.
As Loewenstein puts it, curiosity starts to glow when there’s a gap “between what we know and what we want to know.” There’s a sweet spot for curiosity: if we know nothing, we ask no questions; if we know everything, we ask no questions either. Curiosity is fueled once we know enough to know that we do not know.
It’s a rather beautiful discovery: in a world where so many people seem to hold extreme views with strident certainty, you can deflate somebody’s overconfidence and moderate their politics simply by asking them to explain the details. Next time you’re in a politically heated argument, try asking your interlocutor not to justify herself, but simply to explain the policy in question.