Hello World: Being Human in the Age of Algorithms
Kindle Notes & Highlights
1%
Or Mark Zuckerberg, who, when writing the code for Facebook in his dorm room at Harvard in 2004, would never have imagined his creation would go on to be accused of helping manipulate votes in elections around the globe.5
2%
No object or algorithm is ever either good or evil in itself. It’s how they’re used that matters. GPS was invented to launch nuclear missiles and now helps deliver pizzas. Pop music, played on repeat, has been deployed as a torture device. And however beautifully made a garland of flowers might be, if I really wanted to I could strangle you with it.
5%
as that old internet joke says, the best place to hide a dead body is on the second page of Google search results
7%
If there’s anything we can learn from this story, it’s that the human element does seem to be a critical part of the process: that having a person with the power of veto in a position to review the suggestions of an algorithm before a decision is made is the only sensible way to avoid mistakes.
7%
The only problem with this conclusion is that humans aren’t always that reliable either. Sometimes, like Petrov, they’ll be right to over-rule an algorithm. But often our instincts are best ignored.
7%
In his book, Meehl systematically compared the performance of humans and algorithms on a whole variety of subjects – predicting everything from students’ grades to patients’ mental health outcomes – and concluded that mathematical algorithms, no matter how simple, will almost always make better predictions than people.
8%
But there’s a paradox in our relationship with machines. While we have a tendency to over-trust anything we don’t understand, as soon as we know an algorithm can make mistakes, we also have a rather annoying habit of over-reacting and dismissing it completely, reverting instead to our own flawed judgement. It’s known to researchers as algorithm aversion. People are less tolerant of an algorithm’s mistakes than of their own – even if their own mistakes are bigger.
8%
This tendency of ours to view things in black and white – seeing algorithms as either omnipotent masters or a useless pile of junk – presents quite a problem in our high-tech age. If we’re going to get the most out of technology, we’re going to need to work out a way to be a bit more objective. We need to learn from Kasparov’s mistake and acknowledge our own flaws, question our gut reactions and be a bit more aware of our feelings towards the algorithms around us. On the flip side, we should take algorithms off their pedestal, examine them a bit more carefully and ask if they’re really capable ...more
14%
All of the above is true, but the actual effects are tiny. In the Facebook experiment, users were indeed more likely to post positive messages if they were shielded from negative news. But the difference amounted to less than one-tenth of one percentage point. Likewise, in the targeted adverts example, the makeup sold to introverts was more successful if it took into account the person’s character, but the difference it made was minuscule. A generic advert got 31 people in 1,000 to click on it. The targeted ad managed 35 in 1,000. Even that figure of 50 per cent improvement that I cited on ...more
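To make the arithmetic behind this highlight concrete, here is a small sketch (using only the click rates quoted above, 31 and 35 per 1,000) showing how the same tiny gap looks modest as an absolute difference but much larger when framed as a relative improvement:

```python
# Click-through rates quoted in the passage above.
generic_rate = 31 / 1000   # generic advert: 31 clicks per 1,000 views
targeted_rate = 35 / 1000  # targeted advert: 35 clicks per 1,000 views

# Absolute lift: extra clicks per person shown the ad.
absolute_lift = targeted_rate - generic_rate   # 0.004 = 0.4 percentage points

# Relative lift: the same gap expressed as a percentage increase
# over the generic baseline, which sounds far more impressive.
relative_lift = (targeted_rate - generic_rate) / generic_rate  # ~0.129 = ~12.9%

print(f"Absolute lift: {absolute_lift:.1%} points")  # Absolute lift: 0.4% points
print(f"Relative lift: {relative_lift:.1%}")         # Relative lift: 12.9%
```

The choice of framing matters: an advertiser can truthfully report the relative figure while the practical effect on any individual viewer remains minuscule.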
14%
The methods can work, yes. But the advertisers aren’t injecting their messages straight into the minds of a passive audience. We’re not sitting ducks. We’re much better at ignoring advertising or putting our own spin on interpreting propaganda than the people sending those messages would like us to be. In the end, even with the best, most deviously micro-profiled campaigns, only a small amount of influence will leak through to the target.