Kindle Notes & Highlights
by Hannah Fry
Read between
December 3 - December 28, 2019
No object or algorithm is ever either good or evil in itself. It’s how they’re used that matters. GPS was invented to launch nuclear missiles and now helps deliver pizzas.
Because the future doesn’t just happen. We create it.
For some, the idea of an algorithm working without explicit instructions is a recipe for disaster. How can we control something we don’t understand? What if the capabilities of sentient, super-intelligent machines transcend those of their makers? How will we ensure that an AI we don’t understand and can’t control isn’t working against us?
Although AI has come on in leaps and bounds of late, it is still only ‘intelligent’ in the narrowest sense of the word. It would probably be more useful to think of what we’ve been through as a revolution in computational statistics than a revolution in intelligence.
In my years working as a mathematician with data and algorithms, I’ve come to believe that the only way to objectively judge whether an algorithm is trustworthy is by getting to the bottom of how it works. In my experience, algorithms are a lot like magical illusions. At first they appear to be nothing short of actual wizardry, but as soon as you know how the trick is done, the mystery evaporates.
Meehl systematically compared the performance of humans and algorithms on a whole variety of subjects – predicting everything from students’ grades to patients’ mental health outcomes – and concluded that mathematical algorithms, no matter how simple, will almost always make better predictions than people.
Algorithm aversion: people are less tolerant of an algorithm’s mistakes than of their own – even if their own mistakes are bigger.
If we’re going to get the most out of technology, we’re going to need to work out a way to be a bit more objective. We need to learn from Kasparov’s mistake and acknowledge our own flaws, question our gut reactions and be a bit more aware of our feelings towards the algorithms around us. On the flip side, we should take algorithms off their pedestal, examine them a bit more carefully and ask if they’re really capable of doing what they claim. That’s the only way to decide if they deserve the power they’ve been given.
The experimenters suppressed any friends’ posts that contained positive words, and then did the same with those containing negative words, and watched to see how the unsuspecting subjects would react in each case. Users who saw less negative content in their feeds went on to post more positive stuff themselves. Meanwhile, those who had positive posts hidden from their timeline went on to use more negative words themselves. Conclusion: we may think we’re immune to emotional manipulation, but we’re probably not.
That was the deal that we made. Free technology in return for your data and the ability to use it to influence and profit from you. The best and worst of capitalism in one simple swap.
Whenever we use an algorithm – especially a free one – we need to ask ourselves about the hidden incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a trade I’m comfortable with? Would I be better off without it?
The outcome is biased because reality is biased. More men commit homicides, so more men will be falsely accused of having the potential to murder.
Whatever the reason for the disparity, the sad result is that rates of arrest are not the same across racial groups in the United States. Blacks are re-arrested more often than whites. The algorithm is judging them not on the colour of their skin, but on the all-too-predictable consequences of America’s historically deeply unbalanced society. Until all groups are arrested at the same rate, this kind of bias is a mathematical certainty.
But however accurate the results might be, you could argue that using algorithms as a mirror to reflect the real world isn’t always helpful, especially when the mirror is reflecting a present reality that only exists because of centuries of bias. Now, if it so chose, Google could subtly tweak its algorithm to prioritize images of female or non-white professors over others, to even out the balance a little and reflect the society we’re aiming for, rather than the one we live in.
The fact is, our minds just aren’t built for robust, rational assessment of big, complicated problems. We can’t easily weigh up the various factors of a case and combine everything together in a logical manner while blocking the intuitive System 1 from kicking in and taking a few cognitive short cuts.
If you’ve ever convinced yourself that an extremely expensive item of clothing was good value just because it was 50 per cent off (as I regularly do), then you’ll know all about the so-called anchoring effect. We find it difficult to put numerical values on things, and are much more comfortable making comparisons between values than just coming up with a single value out of the blue.
time seems to speed up as you get older. It happens because humans’ senses work in relative terms rather than in absolute values. We don’t perceive each year as a fixed period of time; we experience each new year as a smaller and smaller fraction of the life we’ve lived.
Weber’s Law states that the smallest change in a stimulus that can be perceived, the so-called ‘Just Noticeable Difference’, is proportional to the initial stimulus.
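The two ideas above fit together: Weber’s Law says the just noticeable difference is proportional to the baseline stimulus, and the “shrinking years” effect treats each new year as a fraction of the life already lived. A minimal sketch (the function names and the Weber fraction `k=0.1` are illustrative choices, not values from the book):

```python
def just_noticeable_difference(stimulus, k=0.1):
    """Weber's Law: the smallest perceptible change in a stimulus
    is proportional to the initial stimulus. k is the Weber
    fraction (an illustrative value here)."""
    return k * stimulus

def year_as_fraction_of_life(age):
    """The 'shrinking years' idea: your next year is 1/age of the
    life you have lived so far, so each year feels shorter."""
    return 1 / age

# Doubling the baseline doubles the change needed to notice it.
print(just_noticeable_difference(50), just_noticeable_difference(100))

# A year at age 40 is a quarter of the fraction it was at age 10.
print(year_as_fraction_of_life(10), year_as_fraction_of_life(40))
```

The same relative-not-absolute principle drives both: perception scales with what you already have.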
Rather than just closing our eyes and hoping for the best, algorithms require a clear, unambiguous idea of exactly what we want them to achieve and a solid understanding of the human failings they’re replacing.
The association was so strong that the researchers could predict which nuns might have dementia just by reading their letters. Ninety per cent of the nuns who went on to develop Alzheimer’s had ‘low linguistic ability’ as young women, while only 13 per cent of the nuns who maintained cognitive ability into old age got a ‘low idea density’ score in their essays.
Something worth remembering whenever you send off for a commercial genetic report: you’re not using the product; you are the product.
In whatever facet of life an algorithm is introduced, there will always be some kind of a balance. Between privacy and public good. Between the individual and the population. Between different challenges and priorities. It isn’t easy to find a path through the tangle of incentives, even when the clear prize of better healthcare for all is at the end.
This is all Bayes’ theorem does: offers a systematic way to update your belief in a hypothesis on the basis of the evidence. It accepts that you can’t ever be completely certain about the theory you’re considering, but allows you to make a best guess from the information available.
Twenty-six years before the Air France crash, in 1983, the psychologist Lisanne Bainbridge wrote a seminal essay on the hidden dangers of relying too heavily on automated systems. Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability.
By now, we know that humans are really good at understanding subtleties, at analysing context, applying experience and distinguishing patterns. We’re really bad at paying attention, at precision, at consistency and at being fully aware of our surroundings. We have, in short, precisely the opposite set of skills to algorithms.
as time goes on, autonomous driving will have a few lessons to teach us that apply well beyond the world of motoring. Not just about the messiness of handing over control, but about being realistic in our expectations of what algorithms can do.
That means that even the most serious of crimes will probably be carried out close to where the offender lives. And, as you move further and further away from the scene of the crime, the chance of finding your perpetrator’s home slowly drops away, an effect known to criminologists as ‘distance decay’.
On the other hand, serial offenders are unlikely to target victims who live very close by, to avoid unnecessary police attention on their doorsteps or being recognized by neighbours. The result is known as a ‘buffer zone’ which encircles the offender’s home, a region in which there’ll be a very low chance of their committing a crime.
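Putting the two effects together gives a score that is low right at the crime scene, peaks at the edge of the buffer zone, and then decays with distance. This is a toy sketch of that shape only – not the real geographic-profiling formula criminologists use, and the radius and decay exponent are arbitrary illustrative values:

```python
def offender_home_likelihood(distance, buffer_radius=1.0):
    """Toy score for how likely the offender lives at a spot
    `distance` km from the crime scene.

    - Inside the buffer zone the score ramps up from zero:
      offenders avoid striking on their own doorstep.
    - Beyond it, 'distance decay' takes over: the score falls
      away the further the spot is from the scene."""
    if distance <= buffer_radius:
        return distance / buffer_radius        # rises to 1 at the buffer edge
    return (buffer_radius / distance) ** 2     # inverse-square decay beyond it

# Score several crime scenes and sum per candidate location to get
# a crude heat map of where the offender might live.
for d in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(d, offender_home_likelihood(d))
```

The peak at the buffer’s edge is the whole idea: the most likely home location is near the crimes, but not right on top of them.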
It’s one of many examples of how badly we need independent experts and a regulatory body to ensure that the good an algorithm does outweighs the harm.
When it comes to identifying people from photos – to quote a presentation given by an FBI forensics unit – ‘Lack of statistics means: conclusions are ultimately opinion based.’
In part, this comes down to deciding, as a society, what we think success looks like. What is our priority? Is it keeping crime as low as possible? Or preserving the freedom of the innocent above all else? How much of one would you sacrifice for the sake of the other?
It’s a phenomenon known to psychologists as social proof. Whenever we haven’t got enough information to make decisions for ourselves, we have a habit of copying the behaviour of those around us. It’s why theatres sometimes secretly plant people in the audience to clap and cheer at the right times. As soon as we hear others clapping, we’re more likely to join in.
Conclusion: the market isn’t locked into a particular state. Both luck and quality have a role to play.
Just because something is successful, that doesn’t mean it’s of a high quality.
Unfortunately, in trying to find an objective measure of quality, we come up against a deeply contentious philosophical question that dates back as far as Plato. One that has been the subject of debate for more than two millennia. How do you judge the aesthetic value of art?
Our judgements of beauty are not wholly subjective, nor can they be entirely objective. They are sensory, emotional and intellectual all at once – and, crucially, can change over time depending on the state of mind of the observer.
Good artists borrow; great artists steal – Pablo Picasso
There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations. We keep on turning and making new combinations indefinitely; but they are the same old pieces of colored glass that have been in use through all the ages.
Cope, meanwhile, has a very simple definition for creativity, which easily encapsulates what the algorithms can do: ‘Creativity is just finding an association between two things which ordinarily would not seem related.’
After all, people form emotional relationships with objects that don’t love them back – like treasured childhood teddy bears or pet spiders.
Everywhere you look – in the judicial system, in healthcare, in policing, even online shopping – there are problems with privacy, bias, error, accountability and transparency that aren’t going to go away easily.
So, imagine for a moment: what if we accepted that perfection doesn’t exist? Algorithms will make mistakes. Algorithms will be unfair. That should in no way distract us from the fight to make them more accurate and less biased wherever we can – but perhaps acknowledging that algorithms aren’t perfect, any more than humans are, might just have the effect of diminishing any assumption of their authority.
I think this is the key to a future where the net overall effect of algorithms is a positive force for society. And it’s only right that it’s a job that rests squarely on our shoulders. Because one thing is for sure. In the age of the algorithm, humans have never been more important.
There’s a trick you can use to spot the junk algorithms. I like to call it the Magic Test. Whenever you see a story about an algorithm, see if you can swap out any of the buzzwords, like ‘machine learning’, ‘artificial intelligence’ and ‘neural network’, and swap in the word ‘magic’. Does everything still make grammatical sense? Is any of the meaning lost? If not, I’d be worried that something smells quite a lot like bullshit. Because I’m afraid – long into the foreseeable future – we’re not going to ‘solve world hunger with magic’ or ‘use magic to write the perfect screenplay’ any more than
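The Magic Test is mechanical enough to sketch as a string substitution. A toy version (the buzzword list comes from the examples in the passage; the function name and word-boundary handling are my own):

```python
import re

# Buzzwords to swap out -- the ones named in the passage.
BUZZWORDS = ['machine learning', 'artificial intelligence',
             'neural network', 'AI']

def magic_test(sentence):
    """Fry's 'Magic Test': swap each buzzword for 'magic'.
    If the claim reads just the same afterwards, be suspicious.
    Word boundaries stop 'AI' matching inside words like 'claims'."""
    for word in BUZZWORDS:
        pattern = r'\b' + re.escape(word) + r'\b'
        sentence = re.sub(pattern, 'magic', sentence, flags=re.IGNORECASE)
    return sentence

print(magic_test('This startup will solve world hunger with artificial intelligence.'))
# This startup will solve world hunger with magic.
```

If the swapped sentence is just as plausible as the original, the original was never really saying anything.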