Kindle Notes & Highlights
Read between April 7 – April 8, 2020
The spread of gun violence shares these features: violence is often transmitted through known social links, and the gap between one shooting and the next is long enough for interrupters to intervene. If shootings were more random, or the gap between them was always much shorter, violence interruption wouldn’t be so effective.
In the world of public health advocacy, few have been as effective – or as pioneering – as Florence Nightingale.
According to Carl Bell, a public health specialist at the University of Chicago, three things are required to stop an epidemic: an evidence base, a method for implementation, and political will.[40]
Granovetter pointed out that this situation would lead to an inevitable domino effect: the person with a 0 threshold would start rioting, triggering the person with a threshold of 1, which would trigger the person with a threshold of 2. This would continue until the entire crowd was rioting.
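Granovetter's cascade can be sketched in a few lines. This is an illustrative toy model (the `simulate_riot` function and the 100-person crowd are my assumptions, not from the text): each person joins once the number already rioting reaches their personal threshold.

```python
# Granovetter's threshold model: a person starts rioting once the number
# of people already rioting reaches their personal threshold.

def simulate_riot(thresholds):
    """Iterate until no one else joins; return how many end up rioting."""
    rioting = 0
    while True:
        new_rioting = sum(1 for t in thresholds if t <= rioting)
        if new_rioting == rioting:
            return rioting
        rioting = new_rioting

# Thresholds 0, 1, 2, ..., 99: a full domino chain, so everyone riots.
print(simulate_riot(list(range(100))))  # 100

# Remove just the person with threshold 1 and the cascade stalls at one rioter.
print(simulate_riot([0] + list(range(2, 100))))  # 1
```

The second run shows why two near-identical crowds can behave completely differently: removing a single pivotal individual breaks the chain.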
As models become more complicated, with lots of different features and assumptions, it gets harder to identify their flaws.
This creates a problem, because even the most sophisticated mathematical models are a simplification of a messy, complex reality.
What’s more, additional features may not make a model better at representing what we need it to. When it comes to building models, there is always a risk of confusing detail with accuracy.
From disease epidemics to terrorism and crime, forecasts can help agencies plan and allocate resources. They can also help draw attention to a problem, persuading people that there is a need to allocate resources in the first place.
We face a paradox when it comes to forecasting outbreaks. Although pessimistic weather forecasts won’t affect the size of a storm, outbreak predictions can influence the final number of cases. If a model suggests the outbreak is a genuine threat, it may trigger a major response from health agencies. And if this brings the outbreak under control, it means the original forecast will be wrong.
In the field of public health, people often refer to disease control measures as ‘removing the pump handle.’
There’s just one problem with this phrase: when the pump handle came off on 8 September 1854, London’s cholera outbreak was already well in decline. Most of the people at risk had either caught the infection already, or fled the area. If we’re being accurate, ‘removing the pump handle’ should really refer to a control measure that’s useful in theory, but delivered too late.
Generally, we can trace problems with a forecast back to either the model itself or the data that goes into it. A good rule of thumb is that a mathematical model should be designed around the data available.
Outside my field, I’ve found that people generally respond to mathematical analysis in one of two ways. The first is with suspicion. This is understandable: if something is opaque and unfamiliar, our instinct can be to not trust it. As a result, the analysis will probably be ignored. The second kind of response is at the other extreme. Rather than ignore results, people may have too much faith in them.
According to statistician George Box, it’s not just observers who can be seduced by mathematical analysis. ‘Statisticians, like artists, have the bad habit of falling in love with their models,’ he supposedly once said.[72]
Likewise, if we know the delay in reporting during an outbreak, we can adjust how we interpret the outbreak curve. Such ‘nowcasting’, which aims to understand the situation as it currently stands, is often necessary before forecasts can be made.
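A minimal sketch of that delay adjustment (my illustration, with invented numbers, not the book's method): if we can estimate what fraction of each day's cases has been reported so far, we can scale the raw counts up accordingly.

```python
# Nowcasting sketch: scale up recent case counts by the estimated
# fraction of each day's cases that has been reported so far.

def nowcast(reported, prop_reported):
    """reported[i]: cases reported so far for day i (most recent last).
    prop_reported[i]: estimated fraction of day i's cases reported by now."""
    return [r / p for r, p in zip(reported, prop_reported)]

# Hypothetical outbreak curve: the apparent decline over the last two
# days vanishes once the reporting delay is accounted for.
reported = [20, 40, 30, 10]
prop_reported = [1.0, 1.0, 0.6, 0.2]
print([round(x) for x in nowcast(reported, prop_reported)])  # [20, 40, 50, 50]
```

The raw counts suggest the outbreak has peaked; the nowcast suggests it has plateaued, which is exactly the kind of distinction that matters before making a forecast.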
Although some local-level data is available sooner, it can take a long time to build up a national picture of the crisis. ‘We’re always looking backwards,’ said Rosalie Liccardo Pacula, a senior economist at the RAND Corporation, which specialises in public policy research.
When they looked at data between 1979 and 2016, they found that the number of overdose deaths in the US grew exponentially during this period, with the death rate doubling every ten years.[77]
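As a quick check on what ‘doubling every ten years’ implies (my arithmetic, not the study's): growth of the form 2^(t/10) corresponds to roughly 7% growth per year, compounding to about a 13-fold rise over the 37 years studied.

```python
# Exponential growth with a fixed doubling time T follows 2**(t/T).

doubling_time = 10  # years, as stated in the highlight

# Equivalent annual growth rate: 2**(1/10) - 1, roughly 7.2% per year.
annual_growth = 2 ** (1 / doubling_time) - 1
print(round(100 * annual_growth, 1))  # 7.2

# Compounded over the 37 years from 1979 to 2016: roughly 13-fold.
print(round(2 ** (37 / doubling_time), 1))  # 13.0
```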
Depending on whether we look at illnesses or deaths, we get two slightly different impressions of the outbreak.
In the case of opioids, exposure often starts with a prescription. It might be tempting to simply blame patients for taking too much medication, or doctors for overprescribing. But we must also consider the pharmaceutical companies who market strong opioids directly to doctors. And insurance companies, who are often more likely to fund painkillers than alternatives like physiotherapy. Our modern lifestyles also play a role, with rising chronic pain associated with increases in obesity and office-based work.
One of the best ways to slow an epidemic in its early stages is to reduce the number of people who are susceptible.
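A toy SIR-style simulation makes the point concrete (this is my sketch; the parameter values are invented, not from the book): shrinking the susceptible pool at the outset sharply cuts how many people are ever infected.

```python
# Discrete-time SIR sketch: fractions susceptible (s) and infectious (i),
# with transmission rate beta and recovery rate gamma per day.

def attack_rate(s0, beta=0.3, gamma=0.1, days=365):
    """Fraction of the population infected over the whole outbreak."""
    s, i = s0, 0.001
    for _ in range(days):
        new_infections = beta * s * i
        s -= new_infections
        i += new_infections - gamma * i
    return s0 - s

# Nearly everyone susceptible vs. 40% already immune (e.g. vaccinated):
print(round(attack_rate(0.999), 2))
print(round(attack_rate(0.60), 2))
```

With these assumed rates the basic reproduction number is beta/gamma = 3, so starting with 40% immune still leaves the outbreak above threshold, but the eventual attack rate falls dramatically.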
Once the number of new users peaks, we enter the middle stage of a drug epidemic. At this point, there are still a lot of existing users, who may be progressing towards heavier drug use, and potentially moving on to illegal drugs as they lose their access to prescriptions.
In the final stage of a drug epidemic, the number of new and existing users is declining, but a group of heavy users remains.
Because the success of different control strategies can vary between the three stages of a drug epidemic, it’s crucial to know what stage we’re currently in.
It’s a common problem in outbreak analysis: things that aren’t reported are by definition tough to analyse.
‘Because these predictions are likely to over-represent areas that were already known to police, officers become increasingly likely to patrol these same areas and observe new criminal acts that confirm their prior beliefs regarding the distributions of criminal activity.’[82]
‘When you’re training it with data that’s generated by the same system in which minority people are more likely to be arrested for the same behaviour, you’re just going to perpetuate those same issues,’ she said. ‘You have the same problems, but now filtered through this high-tech tool.’
In 2013, researchers at RAND Corporation outlined four common myths about predictive policing.[83]
The first was that a computer knows exactly what will happen in the future. ‘These algorithms predict the risk of future events, not the events themselves,’ they noted.
The second myth was that a computer would do everything, from collecting relevant crime data to making appropriate recommendations. In reality, computers work best when they assist human analysis and decisi...
The third myth was that police forces needed a high-powered model to make good predictions, whereas often the problem is getting hold of the right data. ‘Sometimes you have a dataset where the information you need to make the pre...
The final, and perhaps most persistent, myth was that accurate predictions automatically lead to reductions in crime. ‘Predictions, on their own, are just that – predictions,’ wrote the RAND team. ‘Actual decreases i...
What matters is having analysis that can reveal gaps in our understanding of a situation. ‘They are generally most useful when they identify impacts of policy decisions which are not predictable by commonsense,’ Whitty has suggested. ‘The key is usually not that they are “right”, but that they provide an unpredicted insight.’[84]
‘The real promise of using data analytics to identify those at risk of gunshot victimization lies not with policing, but within a broader public health approach.’ He suggested that predicted victims could benefit from the support of people like social workers, psychologists, and violence interrupters.
In 1980, for example, West Germany made it mandatory for motorcyclists to wear helmets. Over the next six years, motorcycle thefts fell by two thirds. The reason was simple: inconvenience. Thieves could no longer decide to steal a motorcycle on the spur of the moment.
There’s evidence that the presence of things like graffiti and stray shopping trolleys can make people far more likely to litter or use an out-of-bounds thoroughfare.[91]
Charlotte Watts has pointed out that domestic violence can be transmitted across generations, with affected children becoming involved in violence as adults.
‘What is ultimately most effective at changing a person’s behavior is when you try to sit down and try to listen to them and hear them out, let them air their grievances and really try to understand them,’ he said. ‘And then try to guide them to a healthier way of behaving.’
At the time, marketers were getting excited about the notion of ‘influencers’: everyday people who could spark social epidemics.
The idea was that by targeting a few unexpectedly well-connected people, companies could get ideas to spread much further for much less cost.
‘The whole thing that made it interesting to people in the marketing world was that they could get Oprah-like impact from small budgets,’ said Watts, who is now based at the University of Pennsylvania.[3]
The interesting version is that there are specific people – like Milgram’s clothing merchant – who play a massively disproportionate role in social contagion.
In Milgram’s smaller study, the clothing merchant had appeared to be a vital link, but this wasn’t the case for the e-mail chains.
Rather than sending the message to contacts who were especially popular or well connected, people tended to pick based on characteristics like location or occupation. The experiment showed that messages don’t need highly connected people to get to a specific destination.
If we want an idea to spread, we ideally need people to be both highly susceptible and highly influential.
‘Highly influential individuals tend not to be susceptible, highly susceptible individuals tend not to be influential, and almost no one is both highly influential and highly susceptible to influence,’ they noted.
Sparking multiple outbreaks across a network may therefore be more effective than trying to identify high profile influencers within a community.[10]
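One way to see the idea (an illustrative sketch; the network, seeding choices, and parameters are all invented): simulate a simple contagion on a random network and compare seeding the single best-connected node with seeding several ordinary nodes scattered across it.

```python
import random

def spread(neighbours, seeds, p, rng):
    """Independent-cascade contagion: each newly reached node gets one
    chance to pass the idea to each neighbour with probability p."""
    reached, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nb in neighbours[node]:
            if nb not in reached and rng.random() < p:
                reached.add(nb)
                frontier.append(nb)
    return len(reached)

# Random network of 200 people, each pair connected with probability 0.03.
rng = random.Random(42)
n = 200
neighbours = {v: [] for v in range(n)}
for a in range(n):
    for b in range(a + 1, n):
        if rng.random() < 0.03:
            neighbours[a].append(b)
            neighbours[b].append(a)

# One seed at the best-connected hub vs. five seeds at random nodes.
hub = max(range(n), key=lambda v: len(neighbours[v]))
from_hub = spread(neighbours, [hub], p=0.1, rng=random.Random(0))
from_five = spread(neighbours, rng.sample(range(n), 5), p=0.1, rng=random.Random(0))
print(from_hub, from_five)
```

Any single run is noisy, so a fair comparison would average over many random networks and seed choices; the point of the sketch is simply that several small sparks compete well with one big one.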
From political stances to conspiracy theories, social media communities frequently cluster around similar worldviews.[12] This creates the potential for ‘echo chambers’, in which people rarely hear views that contradict their own.
On social media, three main factors influence what we read: whether one of our contacts shares an article; whether that content appears in our feed; and whether we click on it. According to data from Facebook, all three factors can affect our consumption of information.
They found that the articles people saw on social media and search engines were generally more polarised than the ones they came across on their favourite news websites.[24]
When sociologists at Duke University got US volunteers to follow Twitter accounts with opposing views, they found that people tended to retreat further back into their own political territory afterwards.[25]