Hello World: How to be Human in the Age of the Machine
Read between March 22 and March 27, 2023
2%
That symbolic victory, of machine over man, which in many ways marked the start of the algorithmic age, was down to far more than sheer raw computing power. In order to beat Kasparov, Deep Blue had to understand him not simply as a highly efficient processor of brilliant chess moves, but as a human being.
3%
It was a strange defeat. Kasparov was more than capable of working his way out of those positions on the board, but he had underestimated the ability of the algorithm and then allowed himself to be intimidated by it.
3%
They are what makes computer science an actual science, and in the process have fuelled many of the most miraculous modern achievements made by machines.
3%
Prioritization: making an ordered list
3%
Google Search predicts the page you’re looking for by ranking one result over another. Netflix suggests which films you might like to watch next. Your TomTom selects your fastest route. All use a mathematical process to order the vast array of possible choices.
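A minimal sketch of that prioritization step in Python: give every candidate a score, then sort by it. The routes, fields and weights below are invented purely for illustration; none of this is how Google, Netflix or TomTom actually rank anything.

# Hypothetical prioritization: score each option, then return an ordered list.
candidates = [
    {"name": "Route A", "time_min": 34, "toll": True},
    {"name": "Route B", "time_min": 41, "toll": False},
    {"name": "Route C", "time_min": 37, "toll": False},
]

def score(option):
    # Lower is better here: travel time plus a flat penalty for toll roads.
    return option["time_min"] + (10 if option["toll"] else 0)

ranked = sorted(candidates, key=score)      # best option first
print([c["name"] for c in ranked])          # ['Route C', 'Route B', 'Route A']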
3%
Classification: picking a category
4%
For some, the idea of an algorithm working without explicit instructions is a recipe for disaster. How can we control something we don’t understand? What if the capabilities of sentient, super-intelligent machines transcend those of their makers? How will we ensure that an AI we don’t understand and can’t control isn’t working against us?
5%
Although AI has come on in leaps and bounds of late, it is still only ‘intelligent’ in the narrowest sense of the word. It would probably be more useful to think of what we’ve been through as a revolution in computational statistics than a revolution in intelligence. I know that makes it sound a lot less sexy (unless you’re really into statistics), but it’s a far more accurate description of how things currently stand.
5%
There’s a moral somewhere in this story. Although he probably felt a little foolish at the time, in ignoring the information in front of his eyes (like seeing a sheer drop out of the car window) and attributing greater intelligence to an algorithm than it deserved, Jones was in good company. After all, Kasparov had fallen into the same trap some twelve years earlier. And, in much quieter but no less profound ways, it’s a mistake almost all of us are guilty of making, perhaps without even realizing.
6%
But there’s a distinction that needs making here. Because trusting a usually reliable algorithm is one thing. Trusting one without any firm understanding of its quality is quite another.
8%
Both of these life-or-death scenarios, Alton Towers and Petrov’s alarm, serve as dramatic illustrations of a much deeper dilemma. In the balance of power between human and algorithm, who – or what – should have the final say?
8%
Meehl systematically compared the performance of humans and algorithms on a whole variety of subjects – predicting everything from students’ grades to patients’ mental health outcomes – and concluded that mathematical algorithms, no matter how simple, will almost always make better predictions than people.
8%
It’s known to researchers as algorithm aversion. People are less tolerant of an algorithm’s mistakes than of their own – even if their own mistakes are bigger.
14%
There is an asymmetry in how we view the power of targeted political adverts. We like to think of ourselves as independently minded and immune to manipulation, and yet imagine others – particularly those of a different political persuasion – as being fantastically gullible. The reality is probably something in between.
18%
Because – remarkably – with an algorithm as part of the process, both consistency and individualized justice can be guaranteed. No one needs to choose between them.
23%
This is a vitally important point to pick up on. If we threw the algorithms away, what kind of a justice system would remain? Because inconsistency isn’t the only flaw from which judges have been shown to suffer.
24%
Put simply, Weber’s Law states that the smallest change in a stimulus that can be perceived, the so-called ‘Just Noticeable Difference’, is proportional to the initial stimulus.
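In symbols: if I is the size of the initial stimulus and ΔI is the Just Noticeable Difference, Weber’s Law says ΔI = k · I, with the constant k depending on which sense is being tested. So if k were, say, around 2 per cent for lifted weights, you would notice roughly 20 g added to a 1 kg load, but would need roughly 200 g added before a 10 kg load felt any different.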
28%
trained an algorithm to distinguish between photos of wolves and pet huskies.
32%
Imagine you go to the doctor complaining of unintentional weight loss and stomach aches, plus a bit of heartburn for good measure. By analogy with playing Jeopardy, the challenge is to find potential diagnoses (responses) that might explain the symptoms (clues), look for further evidence on each and update the confidence in a particular answer as more information becomes available. Doctors call this differential diagnosis. Mathematicians call it Bayesian inference.fn1
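A toy version of that updating loop in Python. The candidate diagnoses, the prior probabilities and the likelihood of each symptom under each diagnosis are all made-up numbers; the only point is the mechanics of multiplying prior belief by likelihood and renormalising as each new clue arrives.

# Toy Bayesian updating over candidate diagnoses (all numbers invented).
priors = {"gastritis": 0.55, "peptic ulcer": 0.35, "something rarer": 0.10}

# P(symptom | diagnosis) for each observed clue, per hypothesis (invented).
likelihoods = {
    "weight loss": {"gastritis": 0.2, "peptic ulcer": 0.4, "something rarer": 0.7},
    "heartburn":   {"gastritis": 0.6, "peptic ulcer": 0.7, "something rarer": 0.3},
}

beliefs = dict(priors)
for symptom, table in likelihoods.items():
    # Bayes' rule: weight each belief by how well it explains the new clue...
    beliefs = {d: p * table[d] for d, p in beliefs.items()}
    # ...then renormalise so the beliefs sum to one again.
    total = sum(beliefs.values())
    beliefs = {d: p / total for d, p in beliefs.items()}

print(beliefs)  # updated confidence in each diagnosis after both clues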
33%
Asthma is not usually a fatal condition, and yet 1,200 people across the UK die from it every year, 26 of whom are children.
33%
There is a stark contrast between the rich and detailed datasets owned by data brokers and the sparse and disconnected datasets found in healthcare.
39%
Because, thankfully, there is a route through all of this chaos – a way to make sensible guesses in a messy world. It all comes down to a phenomenally powerful mathematical formula, known as Bayes’ theorem.
39%
Yet at its heart the idea is extraordinarily simple. So simple, in fact, that you might initially think it’s just stating the obvious.
39%
The point is, each new observation adds to your overall assessment. This is all Bayes’ theorem does: offers a systematic way to update your belief in a hypothesis on the basis of the evidence.30
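Written out, for a hypothesis H and a new piece of evidence E:

P(H | E) = P(E | H) × P(H) / P(E)

The updated (posterior) belief in H is the prior belief, rescaled by how well H predicts the evidence actually observed; apply the rule again for each further observation and the belief keeps being revised.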
40%
By providing a mechanism to measure your belief in something, Bayes allows you to draw sensible conclusions from sketchy observations, from messy, incomplete and approximate data – even from ignorance.
42%
Because there’s another layer of difficulty to contend with when trying to build that sci-fi fantasy of a go-anywhere, do-anything, steering-wheel-free driverless car, and it’s one that goes well beyond the technical challenge. A fully autonomous car will also have to deal with the tricky problem of people.
43%
But there’s a hidden danger in building an automated system that can safely handle virtually every issue its designers can anticipate. If a pilot is only expected to take over in exceptional circumstances, they’ll no longer maintain the skills they need to operate the system themselves. So they’ll have very little experience to draw on to meet the challenge of an unanticipated emergency.
43%
Twenty-six years before the Air France crash, in 1983, the psychologist Lisanne Bainbridge wrote a seminal essay on the hidden dangers of relying too heavily on automated systems.48 Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability.
45%
By now, we know that humans are really good at understanding subtleties, at analysing context, applying experience and distinguishing patterns. We’re really bad at paying attention, at precision, at consistency and at being fully aware of our surroundings. We have, in short, precisely the opposite set of skills to algorithms.
47%
Just like the rest of us, criminals tend to stick to areas they are familiar with. They operate locally. That means that even the most serious of crimes will probably be carried out close to where the offender lives.
47%
And so it was with Rossmo’s algorithm for Operation Lynx – the hunt for the serial rapist. The team now had the locations of five separate crimes, plus several places where a stolen credit card had been used by the attacker to buy alcohol, cigarettes and a video game.
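The gist of geographic profiling can be sketched in a few lines of Python. This is not Rossmo’s published formula, which uses a carefully calibrated buffer zone and decay exponents; it only illustrates the underlying idea that a point on the map scores higher when it sits near many of the crime sites, but not directly on top of them.

# Illustrative geographic-profiling score: the distance-decay-plus-buffer idea,
# not Rossmo's actual formula. Coordinates are invented.
crime_sites = [(2.0, 3.0), (2.5, 4.1), (3.2, 3.7), (1.8, 2.6), (2.9, 3.1)]
BUFFER = 0.5   # offenders tend not to strike immediately next to home

def score(x, y):
    total = 0.0
    for cx, cy in crime_sites:
        d = abs(x - cx) + abs(y - cy)          # Manhattan distance to a crime site
        if d > BUFFER:
            total += 1.0 / d                   # nearer sites count for more...
        else:
            total += d / (BUFFER ** 2)         # ...but the score dips again inside the buffer zone
    return total

# Scan a coarse grid and report the highest-scoring cell as the best guess
# for the likely anchor point.
best_score, best_point = max(
    (score(x / 10, y / 10), (x / 10, y / 10))
    for x in range(60) for y in range(60)
)
print(best_point, best_score)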
47%
Barwell was a 42-year-old married man and father of four, who had been in jail for armed robbery during the hiatus in the attacks.
48%
A lot of people would think twice about riding the New York City subway in the 1980s. It wasn’t a nice place. Graffiti covered every surface, the cars stank of stale urine, and the platforms were rife with drug use, theft and robbery.
48%
Bratton had a clever idea of his own. He knew that people begging, urinating and jumping the turnstiles were a big problem on the subway. He decided to focus police attention on addressing those minor misdemeanours rather than the much more serious crimes of robbery and murder that were also at epidemic levels below ground.
49%
Burglars also have something in common with the serial murderers and rapists that Rossmo studied: they tend to prefer sticking to areas they’re familiar with. We now know you’re more likely to be burgled if you live on a street that a burglar regularly uses, say on their way to work or school.
50%
You might well have come across PredPol already. It’s been the subject of thousands of news articles since its launch in 2011, usually under a headline referencing the Tom Cruise film Minority Report. It’s become like the Kim Kardashian of algorithms: extremely famous, heavily criticized in the media, but without anyone really understanding what it does.
60%
Others still, Immanuel Kant among them, have said the truth is something in between. That our judgements of beauty are not wholly subjective, nor can they be entirely objective. They are sensory, emotional and intellectual all at once – and, crucially, can change over time depending on the state of mind of the observer.
61%
This had certainly been the opinion of Douglas Hofstadter, the cognitive scientist and author who had organized the concert in the first place. A few years earlier, in his 1979 Pulitzer Prize-winning book Gödel, Escher, Bach, Hofstadter had taken a firm stance on the matter: