Hello World: How to be Human in the Age of the Machine
There’s a reason for this strange design. In the 1920s, Robert Moses, a powerful New York urban planner, was keen to keep his newly finished, award-winning state park at Jones Beach the preserve of white and wealthy Americans. Knowing that his preferred clientele would travel to the beach in their private cars, while people from poor black neighbourhoods would get there by bus, he deliberately tried to limit access by building hundreds of low-lying bridges along the highway, too low for the 12-foot buses to pass under.
Modern inventions are no different. Just ask the residents of Scunthorpe in the north of England, who were blocked from opening AOL accounts after the internet giant created a new profanity filter that objected to the name of their town.
Prioritization: making an ordered list
Classification: picking a category
Association: finding links
Filtering: isolating what’s important
Rule-based algorithms
Machine-learning algorithms
Although AI has come on in leaps and bounds of late, it is still only ‘intelligent’ in the narrowest sense of the word. It would probably be more useful to think of what we’ve been through as a revolution in computational statistics than a revolution in intelligence. I know that makes it sound a lot less sexy (unless you’re really into statistics), but it’s a far more accurate description of how things currently stand. For the time being, worrying about evil AI is a bit like worrying about overcrowding on Mars.fn1
In 2012, a number of disabled people in Idaho were informed that their Medicaid assistance was being cut.22 Although they all qualified for benefits, the state was slashing their financial support – without warning – by as much as 30 per cent,23 leaving them struggling to pay for their care.
In my years working as a mathematician with data and algorithms, I’ve come to believe that the only way to objectively judge whether an algorithm is trustworthy is by getting to the bottom of how it works. In my experience, algorithms are a lot like magical illusions. At first they appear to be nothing short of actual wizardry, but as soon as you know how the trick is done, the mystery evaporates.
This is a debate with a long history. In 1954, Paul Meehl, a professor of clinical psychology at the University of Minnesota, annoyed an entire generation of humans when he published Clinical versus Statistical Prediction, coming down firmly on one side of the argument.38 In his book, Meehl systematically compared the performance of humans and algorithms on a whole variety of subjects – predicting everything from students’ grades to patients’ mental health outcomes – and concluded that mathematical algorithms, no matter how simple, will almost always make better predictions than people.
It’s like the saying among airline pilots that the best flying team has three components: a pilot, a computer and a dog. The computer is there to fly the plane, the pilot is there to feed the dog. And the dog is there to bite the human if it tries to touch the computer.
It’s known to researchers as algorithm aversion. People are less tolerant of an algorithm’s mistakes than of their own – even if their own mistakes are bigger.
This tendency of ours to view things in black and white – seeing algorithms as either omnipotent masters or a useless pile of junk – presents quite a problem in our high-tech age.
This was the motivation behind a ground-breaking trial run in 1993 by the British supermarket Tesco. Under the guidance of husband-and-wife team Edwina Dunn and Clive Humby, and beginning in certain selected stores, Tesco released its brand-new Clubcard – a plastic card, the size and shape of a credit card, that customers could present at a checkout when paying for their shopping. The exchange was simple. For each transaction using a Clubcard, the customer would collect points that they could use against future purchases in store, while Tesco would take a record of the sale and associate it …
Unfortunately, in many countries, the law doesn’t do much to protect you. Data brokers are largely unregulated and – particularly in America – opportunities to curb their power have repeatedly been passed over by government. In March 2017, for instance, the US Senate voted to eliminate rules that would have prevented data brokers from selling your internet browser history without your explicit consent.
As the team explained in a presentation at DEFCON in 2017, de-anonymizing huge databases of browser history was spectacularly easy. Here’s how it worked. Sometimes there were direct clues to the person’s identity in the URLs themselves. Like anyone who visited Xing.com, the German equivalent of LinkedIn. If you click on your profile picture on the Xing website, you are sent through to a page with an address that will be something like the following: www.xing.com/profile/Hannah_Fry?sc_omxb_p
And that is where we start to stray very far over the creepy line. When private, sensitive information about you, gathered without your knowledge, is then used to manipulate you. Which, of course, is precisely what happened with the British political consulting firm Cambridge Analytica.
Back in 2012, a year before Cambridge Analytica came on the scene, a group of scientists from the University of Cambridge and Stanford University began looking for a link between the five personality traits and the pages people ‘liked’ on Facebook.17 They built a Facebook quiz with this purpose in mind, allowing users to take real psychometric tests, while hoping to find a connection between a person’s true character and their online persona. People who downloaded their quiz knowingly handed over data on both: the history of their Likes on Facebook and, through a series of questions, their …
The research was a success. With a connection established, the team built an algorithm that could infer someone’s personality from their Facebook Likes alone.
And online, it’s no different from what Obama and Clinton were doing during their campaigns. Every major political party in the Western world uses extensive analysis and micro-targeting of voters.
Let’s assume, for the sake of argument, that all the above is true: Cambridge Analytica served up manipulative fake news stories on Facebook to people based on their psychological profiles. The question is, did it work?
A controversial experiment run by Facebook employees in 2013 manipulated the news feeds of 689,003 users without their knowledge (or consent) in an attempt to control their emotions and influence their moods.23
All of the above is true, but the actual effects are tiny. In the Facebook experiment, users were indeed more likely to post positive messages if they were shielded from negative news. But the difference amounted to less than one-tenth of one percentage point. Likewise, in the targeted adverts example, the makeup sold to introverts was more successful if it took into account the person’s character, but the difference it made was minuscule. A generic advert got 31 people in 1,000 to click on it. The targeted ad managed 35 in 1,000. Even that figure of 50 per cent improvement that I cited here …
And when you remember that, as Jamie Bartlett pointed out in a piece for the Spectator, Trump won Pennsylvania by 44,000 votes out of six million cast, Wisconsin by 22,000, and Michigan by 11,000, perhaps margins of less than 1 per cent might be all you need.24
That was the deal that we made. Free technology in return for your data and the ability to use it to influence and profit from you. The best and worst of capitalism in one simple swap.
It’s known as Sesame Credit, a citizen scoring system used by the Chinese government.
If you’re Chinese, these scores matter. If your rating is over 600 points, you can take out a special credit card. Above 666 and you’ll be rewarded with a higher credit limit. Those with scores above 650 can hire a car without a deposit and use a VIP lane at Beijing airport. Anyone over 750 can apply for a fast-tracked visa to Europe.
In a more recent study, 81 UK judges were asked whether they’d award bail to a number of imaginary defendants.10 Each hypothetical case had its own imaginary back-story and imaginary criminal history. Just like their counterparts in the Virginia study, the British judges failed to agree unanimously on a single one of the 41 cases presented to them.11 But this time, in among the 41 hypothetical cases given to every judge were seven that appeared twice – with the names of the defendants changed on their second appearance so the judge wouldn’t notice they’d been duplicated. It was a sneaky trick …
Unfortunately for Zilly, Wisconsin judges were using a proprietary risk-assessment algorithm called COMPAS. As with the Idaho budget tool in the ‘Power’ chapter, the inner workings of COMPAS are considered a trade secret. Unlike the budget tool, however, the COMPAS code still isn’t available to the public.
None of these four statements seems like a particularly grand ambition. But there is, nevertheless, a problem. Unfortunately, some kinds of fairness are mathematically incompatible with others. Let me explain. Imagine stopping people in the street and using an algorithm to predict whether each person will go on to commit a homicide. Now, since the vast majority of murders are committed by men (in fact, worldwide, 96 per cent of murderers are male)37, if the murderer-finding algorithm is to make accurate predictions, it will necessarily identify more men than women as high risk.
Unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at prediction across the board and makes false positive and false negative mistakes at the same rate for every group of defendants.
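The arithmetic behind that impossibility can be sketched in a few lines. This follows the standard statistical identity linking false positive rate to prevalence, precision and detection rate (the general argument, not a worked example from the book, and the numbers are hypothetical):

```python
# Identity: FPR = p/(1-p) * (1-PPV)/PPV * TPR, where p is the fraction of
# the group who really do reoffend (the base rate), PPV is the accuracy of
# a "high risk" prediction, and TPR is the share of reoffenders caught.
def false_positive_rate(base_rate, ppv, tpr):
    """False positive rate implied by a given base rate, precision and recall."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

# Give two groups the SAME predictive accuracy (PPV = 0.8) and the SAME
# detection rate (TPR = 0.7), but different base rates (50% vs 20%)...
fpr_high = false_positive_rate(0.5, 0.8, 0.7)  # 0.175
fpr_low = false_positive_rate(0.2, 0.8, 0.7)   # 0.04375

# ...and the false positive rates are forced apart: the higher-base-rate
# group suffers four times as many wrongful "high risk" labels.
print(fpr_high, fpr_low)
```

The only way to make the two false positive rates equal while keeping accuracy equal is to make the base rates equal, which is exactly the condition stated above.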
Now, if it so chose, Google could subtly tweak its algorithm to prioritize images of female or non-white professors over others, to even out the balance a little and reflect the society we’re aiming for, rather than the one we live in. It’s the same in the justice system. Effectively, using an algorithm lets us ask: what percentage of a particular group do we expect to be high risk in a perfectly fair society? The algorithm gives us the option to jump straight to that figure. Or, if we decide that removing all the bias from the judicial system at once isn’t appropriate, we could instead ask …
Like those signs in supermarkets that say ‘Limit of 12 cans of soup per customer’. They aren’t designed to ward off soup fiends from buying up all the stock, as you might think. They exist to subtly manipulate your perception of how many cans of soup you need. The brain anchors with the number 12 and adjusts downwards. One study back in the 1990s showed that precisely such a sign could increase the average sale per customer from 3.3 tins of soup to 7.52
By now, you won’t be surprised to learn that judges are also susceptible to the anchoring effect. They’re more likely to award higher damages if the prosecution demands a high amount,53 and hand down a longer sentence if the prosecutor requests a harsher punishment.
Put simply, Weber’s Law states that the smallest change in a stimulus that can be perceived, the so-called ‘Just Noticeable Difference’, is proportional to the initial stimulus.
If a crime is marginally worse than something deserving a 20-year sentence, an additional 3 months, say, doesn’t seem enough: it doesn’t feel there’s enough of a difference between a stretch of 20 years and one of 20 years and 3 months. But of course there is: 3 months in prison is still 3 months in prison, regardless of what came before. And yet, instead of adding a few months on, judges will jump to the next noticeably different sentence length, which in this case is 25 years.
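That geometric jump falls straight out of Weber’s Law. A minimal sketch (the 25 per cent Weber fraction is illustrative, chosen to reproduce the 20-to-25-year example, not the value fitted by the study):

```python
# Weber's Law: the Just Noticeable Difference is proportional to the
# current stimulus, so perceptually distinct sentence lengths grow
# geometrically rather than in fixed steps.
def noticeable_sentences(start_years, weber_fraction, n):
    """Generate n sentence lengths, each one JND above the last."""
    lengths = [start_years]
    for _ in range(n - 1):
        lengths.append(lengths[-1] * (1 + weber_fraction))
    return lengths

# With an illustrative Weber fraction of 25%, the next "noticeably
# different" sentence after 20 years is 25 years, not 20 years 3 months.
print(noticeable_sentences(20, 0.25, 4))  # [20, 25.0, 31.25, 39.0625]
```

The point of the sketch is the shape, not the exact numbers: whatever the true fraction, the steps between perceived sentence lengths widen as sentences get longer.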
We know this is happening because we can compare the sentence lengths actually handed out to those Weber’s Law would predict. One study from 2017 looked at over a hundred thousand sentences in both Britain and Australia and found that up to 99 per cent of defendants deemed guilty were given a sentence that fits the formula.
The more I’ve read, the more people I’ve spoken to, the more I’ve come to believe that we’re expecting a bit too much of our human judges. Injustice is built into our human systems. For every Christopher Drew Brooks, treated unfairly by an algorithm, there are countless cases like that of Nicholas Robinson, where a judge errs on their own. Having an algorithm – even an imperfect algorithm – working with judges to support their often faulty cognition is, I think, a step in the right direction. At least a well-designed and properly regulated algorithm can help get rid of systematic bias and …
Now, little more than a decade later, it’s widely accepted that the future of transportation is driverless. In late 2017, Philip Hammond, the British Chancellor of the Exchequer, announced the government’s intention to have fully driverless cars – without a safety attendant on board – on British roads by 2021. Daimler has promised driverless cars by 2020,15 Ford by 2021,16 and other manufacturers have made their own, similar forecasts.
It’s no exaggeration to say that Bayes’ theorem is one of the most influential ideas in history. Among scientists, machine-learning experts and statisticians, it commands an almost cultish enthusiasm. Yet at its heart the idea is extraordinarily simple. So simple, in fact, that you might initially think it’s just stating the obvious.
But Bayes’ theorem isn’t just an equation for the way humans already make decisions. It’s much more important than that. To quote Sharon Bertsch McGrayne, author of The Theory That Would Not Die: ‘Bayes runs counter to the deeply held conviction that modern science requires objectivity and precision.’31 By providing a mechanism to measure your belief in something, Bayes allows you to draw sensible conclusions from sketchy observations, from messy, incomplete and approximate data – even from ignorance.
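At its core the theorem really is one line of arithmetic. A minimal sketch of a single Bayesian update, using hypothetical sensor numbers rather than anything from a real driverless-car system:

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).
# A belief (the prior) is revised in the light of new evidence.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after observing the evidence."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Hypothetical numbers: a pedestrian detector fires. It has a 90% hit
# rate and a 10% false-alarm rate, and the prior chance of a pedestrian
# on this stretch of road is 5%.
posterior = bayes_update(prior=0.05, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(round(posterior, 3))  # 0.321
```

Note what the sketchy evidence buys: one noisy detection lifts a 5 per cent belief to about 32 per cent, not to certainty, which is exactly the kind of graded, revisable conclusion the quotation describes.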
Bayes’ theorem and the power of probability have driven much of the innovation in autonomous vehicles ever since the DARPA challenge. I asked Paul Newman, professor of robotics at the University of Oxford and founder of Oxbotica, a company that builds driverless cars and tests them on the streets of Britain, how his latest autonomous vehicles worked, and he explained as follows: ‘It’s many, many millions of lines of code, but I could frame the entire thing as probabilistic inference. All of it.’
In the words of one participant in a 2016 focus group at the London School of Economics: ‘You’re going to mug them right off. They’re going to stop and you’re just going to nip round.’ Translation: these cars can be bullied. Stilgoe agrees: ‘People who’ve been relatively powerless on roads up until now, like cyclists, may start cycling very slowly in front of self-driving cars, knowing that there is never going to be any aggression.’
The vision we’ve come to believe in is like a trick of the light. A mirage that promises a luxurious private chauffeur for all but, close up, is actually just a local minibus. If you still need persuading, I’ll leave the final word on the matter to one of America’s biggest automotive magazines, Car and Driver: ‘No car company actually expects the futuristic, crash-free utopia of streets packed with driverless vehicles to transpire anytime soon, nor for decades.’
Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability. By now, we’ve all borne witness to this in some small way. It’s why people can’t remember phone numbers any more, why many of us struggle to read our own handwriting and why lots of us can’t navigate anywhere without GPS. With technology to do it all for us, there’s little opportunity to practise our skills.
Rossmo wasn’t the first person to suggest that criminals unwittingly create geographical patterns. His ideas have a lineage that dates back to the 1820s, when André-Michel Guerry, a lawyer-turned-statistician who worked for the French Ministry of Justice, started collecting records of the rapes, murders and robberies that occurred in the various regions of France.5
But Guerry’s analysis of his national census of criminals suggested otherwise. No matter where you were in France, he found, recognizable patterns appeared in what crimes were committed, how – and by whom. Young people committed more crimes than old, men more than women, poor more than rich. Intriguingly, it soon became clear that these patterns didn’t change over time. Each region had its own set of crime statistics that would barely change year on year. With an almost terrifying exactitude, the numbers of robberies, rapes and murders would repeat themselves from one year to the next.
These two key patterns – distance decay and the buffer zone – hidden among the geography of the most serious crimes, were at the heart of Rossmo’s algorithm. Starting with a crime scene pinned on to a map, Rossmo realized he could mathematically balance these two factors and sketch out a picture of where the perpetrator might live.