Kindle Notes & Highlights
by Hannah Fry
Read between September 17 – October 2, 2021
No object or algorithm is ever either good or evil in itself. It’s how they’re used that matters. GPS was invented to launch nuclear missiles and now helps deliver pizzas.
Although AI has come on in leaps and bounds of late, it is still only ‘intelligent’ in the narrowest sense of the word. It would probably be more useful to think of what we’ve been through as a revolution in computational statistics than a revolution in intelligence.
This was very much the conclusion I came to when studying AI Law and autonomous vehicles. I think the public perception of AI is more optimistic than the reality of its advances. In actuality, when you break it down, it is simpler (for want of a better word; more comprehensible, maybe?) than I first thought.
Perhaps more ominous, given how much of our information we now get from algorithms like search engines, is how much agency people believed they had in their own opinions: ‘When people are unaware they are being manipulated, they tend to believe they have adopted their new thinking voluntarily,’
Meehl systematically compared the performance of humans and algorithms on a whole variety of subjects – predicting everything from students’ grades to patients’ mental health outcomes – and concluded that mathematical algorithms, no matter how simple, will almost always make better predictions than people.
It’s known to researchers as algorithm aversion. People are less tolerant of an algorithm’s mistakes than of their own – even if their own mistakes are bigger.
This is entirely true: autonomous vehicles are over 99% less likely to cause an accident than humans, yet think about the backlash whenever a Tesla has caused an accident. It is weird to me how ready people are to use this as an argument against delegating functions like driving to AI.
Palantir Technologies is one of the most successful Silicon Valley start-ups of all time.
Palantir is just one example of a new breed of companies whose business is our data. And alongside the analysts, there are also the data brokers: companies who buy and collect people’s personal information and then resell it or share it for profit. Acxiom, Corelogic, Datalogix, eBureau – a swathe of huge companies you’ve probably never directly interacted with, that are none the less continually monitoring and analysing your behaviour.
And that is where we start to stray very far over the creepy line. When private, sensitive information about you, gathered without your knowledge, is then used to manipulate you. Which, of course, is precisely what happened with the British political consulting firm Cambridge Analytica.
But the advertisers aren’t injecting their messages straight into the minds of a passive audience. We’re not sitting ducks. We’re much better at ignoring advertising or putting our own spin on interpreting propaganda than the people sending those messages would like us to be.
Trump won Pennsylvania by 44,000 votes out of six million cast, Wisconsin by 22,000, and Michigan by 11,000; perhaps margins of less than 1 per cent might be all you need.24
All around the world, people have free and easy access to instant global communication networks, the wealth of human knowledge at their fingertips, up-to-the-minute information from across the earth, and unlimited usage of the most remarkable software and technology, built by private companies, paid for by adverts.
It’s known as Sesame Credit, a citizen scoring system used by the Chinese government.
Apple has now built ‘intelligent tracking prevention’ into the Safari browser.
Europe might be ahead of the curve, but there is a global trend that is heading in the right direction.
Whenever we use an algorithm – especially a free one – we need to ask ourselves about the hidden incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a trade I’m comfortable with? Would I be better off without it?
Because the algorithm’s predictions are based on the patterns it learns from the data, a random forest is described as a machine-learning algorithm, which comes under the broader umbrella of artificial intelligence.
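A quick sketch of the idea, not the researchers' actual model: with scikit-learn, a random forest can be trained in a handful of lines. The data and features below are synthetic, invented purely for illustration.

    # Minimal random-forest sketch on synthetic data (Python, scikit-learn).
    # Nothing here reflects the actual recidivism model discussed in the book.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for historical case records: 8 numeric features,
    # one binary outcome per record.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A 'forest' of 100 decision trees, each grown on a random slice of the
    # data; their votes are combined into a single prediction.
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)

    # predict_proba reports the share of trees voting for each outcome,
    # i.e. the patterns learned from the data that the highlight refers to.
    print(forest.predict_proba(X_test[:3]))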
The researchers argued that, whichever way you use it, their algorithm vastly outperforms the human judge. And the numbers back them up.
These two kinds of error, false positive and false negative, are not unique to recidivism. They’ll crop up repeatedly throughout this book. Any algorithm that aims to classify can be guilty of these mistakes.
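To make the two error types concrete, a tiny counting example (the labels are invented for the sketch):

    # Counting false positives and false negatives for a binary classifier.
    # actual = what really happened; predicted = what the algorithm said.
    actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = event occurred, 0 = it did not
    predicted = [1, 1, 0, 1, 0, 1, 1, 0]   # the classifier's guesses

    # False positive: the algorithm said 1 when the truth was 0.
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    # False negative: the algorithm said 0 when the truth was 1.
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

    print(f"false positives: {fp}, false negatives: {fn}")   # 2 and 1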
The algorithm’s false positives were disproportionately black.
Weber’s Law states that the smallest change in a stimulus that can be perceived, the so-called ‘Just Noticeable Difference’, is proportional to the initial stimulus.
And yet, instead of adding a few months on, judges will jump to the next noticeably different sentence length, which in this case is 25 years.58
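In symbols, Weber's Law says ΔI = k · I: the just noticeable difference ΔI is a constant fraction k of the current stimulus I. On purely illustrative numbers (the book doesn't give the constant): if k were 0.25, the smallest noticeably different increase on a 20-year sentence would be 0.25 × 20 = 5 years, which is how a judge lands on 25 years rather than 20 years and a few months.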
One London-based defence lawyer I spoke to told me that his role in the courtroom was to exploit the uncertainty in the system, something that the algorithm would make more difficult. ‘The more predictable the decisions get, the less room there is for the art of advocacy.’
…want someone to use a reasoned strategy. We want to keep judicial discretion, as though it is something so holy.
Andy Beck,8 a Harvard pathologist and founder of PathAI, a company created in 2016 that creates algorithms to classify biopsy slides.
The trick is to shift away from the rule-based paradigm and use something called a ‘neural network’.11 You can imagine a neural network as an enormous mathematical structure that features a great many knobs and dials.
But with every picture you feed into it, you tweak those knobs and dials. Slowly, you train it.
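A minimal sketch of that 'tweak the knobs' loop, assuming nothing beyond NumPy. The knobs here are the weights of a single artificial neuron, nudged after every example so its output drifts toward the right answer; a real neural network is this same idea repeated across millions of weights.

    # One artificial 'neuron': a weighted sum of inputs pushed through a
    # squashing function. Training nudges the weights (the knobs and dials)
    # a little for every example it sees. Toy data, invented for the sketch.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))            # 200 toy inputs, 3 numbers each
    y = (X.sum(axis=1) > 0).astype(float)    # toy labels for it to learn

    w = np.zeros(3)                          # the knobs, initially untuned
    b = 0.0
    lr = 0.1                                 # how far each tweak turns them

    for _ in range(100):                     # many passes over the data
        for xi, yi in zip(X, y):
            pred = 1 / (1 + np.exp(-(xi @ w + b)))   # current guess, 0..1
            error = pred - yi                # how wrong the guess was
            w -= lr * error * xi             # turn each knob slightly...
            b -= lr * error                  # ...in the direction that helps

    print(w, b)   # the trained knob settings encode what was learned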
Ninety per cent of the nuns who went on to develop Alzheimer’s had ‘low linguistic ability’ as young women, while only 13 per cent of the nuns who maintained cognitive ability into old age got a ‘low idea density’ score in their essays.
…lack of a single, connected medical history meant that it was impossible for any individual doctor to fully understand the severity of her condition.
Unlike cameras, lasers can measure distance. Vehicles that use a system called LiDAR (Light Detection and Ranging, first used at the second DARPA Grand Challenge) …
…the camera, the LiDAR, the radar – can do enough to understand what’s going on around a vehicle. The trick to successfully building a driverless car is combining them.
This is all Bayes’ theorem does: offers a systematic way to update your belief in a hypothesis on the basis of the evidence.
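A worked numeric example of exactly that update (all probabilities invented for illustration): suppose a car is 10 per cent sure an object ahead is a pedestrian, and then a sensor returns a reading that pedestrians produce 80 per cent of the time but other objects only 20 per cent of the time.

    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).
    # All numbers are illustrative, not from the book.
    prior          = 0.10   # P(pedestrian) before the sensor reading
    p_e_given_h    = 0.80   # P(this reading | pedestrian)
    p_e_given_noth = 0.20   # P(this reading | not a pedestrian)

    # P(E): total probability of seeing this reading at all.
    p_e = p_e_given_h * prior + p_e_given_noth * (1 - prior)

    posterior = p_e_given_h * prior / p_e
    print(round(posterior, 3))   # 0.308: belief rises from 10% to about 31%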
…challenge how we feel about an algorithm making a value judgement on our own, and others’, lives.
Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability.
The good still outweighs the bad. Driving remains one of the biggest causes of avoidable deaths in the world. If the technology is remotely capable of reducing the number of fatalities on the roads overall, you could argue that it would be unethical not to roll it out.