How to Make the World Add Up: Ten Rules for Thinking Differently About Numbers
3%
First, our emotions, our preconceptions and our political affiliations are capable of badly warping the way we interpret the evidence.
3%
Long before the coronavirus, I’d started to worry that this isn’t an attitude that helps us today. We’ve lost our sense that statistics might help us make the world add up. It’s not that we feel every statistic is a lie, but that we feel helpless to pick out the truths.
3%
This statistical cynicism is not just a shame – it’s a tragedy. If we give in to a sense that we no longer have the power to figure out what’s true, then we’ve abandoned a vital tool.
5%
I worry about a world in which many people will believe anything, but I worry far more about one in which people believe nothing beyond their own preconceptions.
5%
Doubt is a powerful weapon, and statistics are a vulnerable target.
6%
Many of us refuse to look at statistical evidence because we’re afraid of being tricked. We think we’re being worldly-wise by adopting the Huff approach of cynically dismissing all statistics. But we’re not. We’re admitting defeat to the populists and propagandists who want us to shrug, give up on logic and evidence, and retreat into believing whatever makes us feel good.
7%
Working out how van Meegeren fooled Bredius teaches us much more than a footnote in the history of art; it explains why we buy things we don’t need, fall for the wrong kind of romantic partner, and vote for politicians who betray our trust.
7%
We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws. The more extreme the emotional reaction, the harder it is to think straight.
7%
‘the ostrich effect’.
7%
Ask yourself: how does this information make me feel? Do I feel vindicated or smug? Anxious, angry or afraid? Am I in denial, scrambling to find a reason to dismiss the claim?
8%
Before I repeat any statistical claim, I first try to take note of how it makes me feel.
8%
Our emotions are powerful. We can’t make them vanish, and nor should we want to. But we can, and should, try to notice when they are clouding our judgement.
8%
‘motivated reasoning’. Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion.
8%
In a football game, we see the fouls committed by the other team but overlook the sins of our own side. We are more likely to notice what we want to notice.
9%
Modern social science agrees with Molière and Franklin: people with deeper expertise are better equipped to spot deception, but if they fall into the trap of motivated reasoning, they are able to muster more reasons to believe whatever they really wish to believe.
10%
The counterintuitive result is that presenting people with a detailed and balanced account of both sides of the argument may actually push people away from the centre rather than pull them in. If we already have strong opinions, then we’ll seize upon welcome evidence, but we’ll find opposing data or arguments irritating. This ‘biased assimilation’ of new evidence means that the more we know, the more partisan we’re able to be on a fraught issue.
11%
Our emotional reaction to a statistical or scientific claim isn’t a side issue. Our emotions can, and often do, shape our beliefs more than any logic. We are capable of persuading ourselves to believe strange things, and to doubt solid evidence, in service of our political partisanship, our desire to keep drinking coffee, our unwillingness to face up to the reality of our HIV diagnosis, or any other cause that invokes an emotional response.
11%
When you see a statistical claim, pay attention to your own reaction. If you feel outrage, triumph, denial, pause for a moment. Then reflect. You don’t need to be an emotionless robot, but you could and should think as well as feel.
11%
Today’s persuaders don’t want you to stop and think. They want you to hurry up and feel. Don’t be rushed.
14%
Psychologists have a name for our tendency to confuse our own perspective with something more universal: it’s called ‘naive realism’, the sense that we are seeing reality as it truly is, without filters or errors.
14%
We are surprised when an election goes against us: everyone in our social circle agreed with us, so why did the nation vote otherwise? Opinion polls don’t always get it right, but I can assure you they have a better track record of predicting elections than simply talking to your friends.
15%
In my book Messy, I spent a chapter discussing similar examples. There was the time the UK government collected data on how many days people had to wait for an appointment when they called their doctor, which is a useful thing to know. But then the government set a target to reduce the average waiting time. Doctors logically responded by refusing to take any advance bookings at all; patients had to phone up every morning and hope they happened to be among the first to get through. Waiting times became, by definition, always less than a day.
15%
Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world, rather than try to understand it.
15%
‘Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.’12 (Or, more pithily: ‘When a measure becomes a target, it ceases to be a good measure.’)
15%
‘The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt ...
17%
There is an important lesson here. Often, looking for an explanation really means looking for someone to blame.
18%
For example, a policy paper published in the UK in 2017 by the Brexit lobby group Leave Means Leave called for a ‘five-year freeze on unskilled immigration’.8 Is that a good idea? Hard to say until we know what the idea really is: by now, we should know to ask, ‘What do you mean by “unskilled”?’ The answer, on closer inspection, is that you’re unskilled if you don’t have a job offer on a salary of at least £35,000 – a level that would rule out the majority of nurses, primary school teachers, technicians, paralegals and chemists.
20%
in the United Kingdom between 1990 and 2017. After taxes, the top 1 per cent saw their share of income rise, but inequality among lower-earning households fell as poorer households tended to catch up with those with middling incomes. It’s an awkward story for anyone who wants an easy answer, but in a complicated world we shouldn’t expect that the statistics will always come out neatly.
23%
that we should all carry a few ‘landmark numbers’ in our heads to allow easy comparison.11 A few examples:
• The population of the United States is 325 million people. The population of the United Kingdom is 65 million. The population of the world is 7.5 billion.
• Name any particular age (under the age of sixty). There are about 800,000 people of that age in the UK. If a policy involves all three-year-olds, for example, there are 800,000 of them. In the US, there are about 4 million people of any particular age (under the age of sixty).
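A minimal sketch, not from the book, of how landmark numbers like these turn headline figures into something judgeable. The population figures are the ones quoted above; the £2 billion programme for three-year-olds is a hypothetical example invented purely for illustration.

```python
# Landmark numbers from the passage above (rounded, approximate).
UK_POPULATION = 65_000_000          # people
US_POPULATION = 325_000_000         # people
UK_SINGLE_YEAR_COHORT = 800_000     # people of any given age under sixty in the UK
US_SINGLE_YEAR_COHORT = 4_000_000   # people of any given age under sixty in the US

def per_person(total, people):
    """Turn a headline total into a per-person figure for easy comparison."""
    return total / people

# Hypothetical headline: "£2bn programme for every three-year-old in the UK".
headline_cost = 2_000_000_000
print(f"about £{per_person(headline_cost, UK_SINGLE_YEAR_COHORT):,.0f} per three-year-old")
# about £2,500 per child – a figure we can actually judge as generous or stingy.
```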
23%
Length of a bed: 2 metres (or 7 feet). As Elliott points out, this helps you visualise the size of a room: how many beds is that?
23%
in general we tend to be rather optimistic; psychologist Tali Sharot reckons that 80 per cent of us suffer from an ‘optimism bias’, systematically overestimating our longevity, our career prospects and our talents while being blind to the risk of illness, incompetence or divorce.
28%
To be clear, there’s nothing wrong with gathering data, poking around to find the patterns and then constructing a hypothesis. That’s all part of science. But you then have to get new data to test the hypothesis. Testing a hypothesis using the numbers that helped form the hypothesis in the first place is not OK.
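A minimal sketch, not from the book, of why that matters: if you trawl one dataset for the strongest-looking pattern and then ‘test’ it on the same numbers, pure noise will pass the test; only fresh data gives a fair check. All the data here is simulated noise, invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 100                       # 50 observations, 100 candidate "predictors"

# Old data: every predictor is pure noise, and so is the outcome.
X_old, y_old = rng.normal(size=(n, k)), rng.normal(size=n)

# Step 1: dredge the old data for the strongest-looking correlation.
corrs = [abs(np.corrcoef(X_old[:, j], y_old)[0, 1]) for j in range(k)]
best = int(np.argmax(corrs))
print(f"best predictor in old data: #{best}, |r| = {corrs[best]:.2f}")   # looks impressive

# Step 2: the fair test – check that same hypothesis on new data.
X_new, y_new = rng.normal(size=(n, k)), rng.normal(size=n)
r_new = abs(np.corrcoef(X_new[:, best], y_new)[0, 1])
print(f"same predictor in new data: |r| = {r_new:.2f}")
# Typically collapses towards zero: the 'discovery' was an artefact of the search.
```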
33%
The power to not collect data is one of the most important and little-understood sources of power that governments have . . . By refusing to amass knowledge in the first place, decision-makers exert power over the rest of us.
34%
One grim landmark was thalidomide, which was widely taken by pregnant women to ease morning sickness only for it to emerge that the drug could cause severe disability and death to unborn children. Following this disaster, women of childbearing age were routinely excluded from trials, as a precaution.
35%
The United Nations, for example, has embraced a series of ambitious ‘Sustainable Development Goals’ for 2030. But development experts are starting to call attention to a problem: we often don’t have the data we would need to figure out whether those goals have been met. Are we succeeding in reducing the amount of domestic violence suffered by women? If few countries have chosen to collect good enough data on the problem to allow for historical comparisons, it’s very hard to tell.11
35%
when it comes to data, size isn’t everything. Opinion polls such as Gallup’s are based on samples of the voting population. This means opinion pollsters need to deal with two issues: sample error and sample bias. Sample error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population. The ‘margin of error’ reported in opinion polls reflects this risk, and the larger the sample, the smaller the margin of error.
35%
Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all.
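A minimal sketch, not from the book, contrasting the two problems. The 52 per cent support figure and the million-voter population are invented for illustration; the point is that a bigger random sample shrinks the margin of error (roughly in proportion to 1/√n), while a biased sample stays wrong no matter how large it gets.

```python
import math
import random

random.seed(1)

# Invented population of one million voters, 52% of whom support candidate A.
population = [1] * 520_000 + [0] * 480_000
true_share = sum(population) / len(population)

def margin_of_error(p, n):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Sampling error only: random samples of increasing size home in on the truth.
for n in (100, 1_000, 10_000):
    sample = random.sample(population, n)
    est = sum(sample) / n
    print(f"random n={n:>6}: {est:.3f} ± {margin_of_error(est, n):.3f}")

# Sampling bias: asking only the first 10,000 names on an unrepresentative list
# (all supporters, in this toy ordering) stays wrong however big the sample is.
biased = population[:10_000]
print(f"biased n=10,000: {sum(biased) / len(biased):.3f}   (true share: {true_share:.3f})")
```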
36%
In the United States, for example, they are more likely than the population as a whole to be young, urban, college-educated and black. Women, meanwhile, are more likely than men to use Facebook and Instagram, but less likely to use LinkedIn. Hispanics are more likely than whites to use Facebook, while blacks are more likely than whites to use LinkedIn, Twitter and Instagram. None of these facts is obvious.
36%
Algorithms trained largely on pale faces and male voices, for example, may be confused when they later try to interpret the speech of women or the appearance of darker complexions. This is believed to help explain why Google photo software confused photographs of people with dark skin with photographs of gorillas;
37%
But we can and should remember to ask who or what might be missing from the data we’re being told about.
39%
So let’s start by toning down the hype a little – both the apocalyptic idea that Cambridge Analytica can read your mind, and the giddy prospect that big data can easily replace more plodding statistical processes such as the CDC’s survey of influenza cases.
39%
But it does matter when people in power are similarly overawed by algorithms they don’t understand, and use them to make life-changing decisions.
40%
orifices
40%
There is ample evidence that human judges aren’t terribly consistent. One way to test this is to show hypothetical cases to various judges and see if they reach different conclusions. They do. In one British study from 2001, judges were asked for judgements on a variety of cases; some of the cases (presented a suitable distance apart to disguise the subterfuge) were simply repeats of earlier cases, with names and other irrelevant details changed. The judges didn’t even agree with their own previous judgement on the identical case. That is one error that we can be fairly sure a computer would …
43%
She argues that we don’t and shouldn’t trust in general: we trust specific people or institutions to do specific things. (For example: I have a friend I’d never trust to post a letter for me – but I’d gladly trust him to take care of my children.) Trust should be discriminating: ideally we should trust the trustworthy, and distrust the incompetent or malign.33
43%
Information should be accessible: that implies it’s not hiding deep in some secret data vault. Decisions should be understandable – capable of being explained clearly and in plain language. Information should be usable – which may mean something as simple as making data available in a standard digital format. And decisions should be assessable – meaning that anyone with the time and expertise has the detail required to rigorously test any claims or decisions if they wish to.
49%
For better or worse, we want our governments to take action, and if they are to take action they need information. Statistics collected by the state make for better-informed policies – on crime, education, infrastructure and much else.
50%
With reliable statistics, citizens can hold their governments to account and those governments can make better decisions.
52%
Quetelet was the person who popularised the idea of taking the ‘average’ or ‘arithmetic mean’ of a group, which was a revolutionary way to summarise complex data with a single number.
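For reference, the summary Quetelet popularised is simply the sum of the observations divided by how many there are; in standard notation (not taken from the book):

\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i

One number, x̄, standing in for the whole group of values x₁, …, xₙ.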
52%
He also pioneered the idea that statistics could be used not just to analyse astronomical observations or the behaviour of gases, but social, psychological and medical questions such as the prevalence of suicide, obesity or crime. Babbage and Quetelet were later to be founders of the Royal Statistical Society; Nightingale, as I have mentioned, became its first female fellow.