Hello World: Being Human in the Age of Algorithms
Kindle Notes & Highlights
40%
Getting a driverless car to answer the question ‘Where am I?’
40%
‘What’s around me?’
40%
‘What should I do?’
41%
probabilistic inference.
41%
Bayesian inference
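(A quick gloss on why Bayesian inference matters here: it gives the car a way to fuse several unreliable clues – GPS, lidar, map matching – into one sharpening belief about ‘Where am I?’. Below is a minimal discrete Bayes-filter sketch; the cells and sensor likelihoods are illustrative assumptions, not any manufacturer’s real figures.)

```python
# A minimal sketch of Bayesian localization: a discrete Bayes filter over a
# one-dimensional road split into five cells. All numbers are made up.

def bayes_update(prior, likelihood):
    """Combine prior belief with new sensor evidence (Bayes' rule)."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

belief = [0.2] * 5                        # maximally uncertain to start

gps = [0.05, 0.20, 0.50, 0.20, 0.05]      # hypothetical GPS: cell 2 likely
belief = bayes_update(belief, gps)

lidar = [0.02, 0.08, 0.45, 0.40, 0.05]    # hypothetical landmark match
belief = bayes_update(belief, lidar)

print([round(b, 3) for b in belief])      # belief concentrates on cell 2
```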
41%
A fully autonomous car will also have to deal with the tricky problem of people.
42%
Waymo cars aren’t allowed to go just anywhere: they’re ‘geo-fenced’ into a small, pre-defined area.
42%
Driverless technology is categorized using six different levels: from level 0 – no automation whatsoever – up to level 5 – the fully autonomous fantasy. In between, they range from cruise control (level 1) to geo-fenced autonomous vehicles (level 4) and are colloquially referred to as level 1: feet off; level 2: hands off; level 3: eyes off; level 4: brain off.
43%
The penny dropped for the captain.
Note from Erhan: ‘Jeton düştü.’ (Turkish for ‘the penny dropped’.)
43%
Lisanne Bainbridge wrote a seminal essay on the hidden dangers of relying too heavily on automated systems.
43%
Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability.
43%
Until we get to full autonomy, the car will still sometimes unexpectedly hand back control to the driver. Will we be able to remember instinctively what to do? And will teenage drivers of the future ever have the chance to master the requisite skills in the first place?
44%
Joshua Brown, who died at the wheel of his Tesla in 2016, had been using Autopilot mode for 37½ minutes when his car hit a truck that was crossing his lane. The investigation by the National Highway Traffic Safety Administration concluded that Brown had not been looking at the road at the time of the crash.
44%
On 18 March 2018, an Uber self-driving vehicle fatally struck a pedestrian. Video footage from inside the car showed that the ‘human monitor’ sitting behind the wheel was looking away from the road in the moments before the collision.57
45%
humans are really good at understanding subtleties, at analysing context, applying experience and distinguishing patterns. We’re really bad at paying attention, at precision, at consistency and at being fully aware of our surroundings. We have, in short, precisely the opposite set of skills to algorithms.
47%
as you move further and further away from the scene of the crime, the chance of finding your perpetrator’s home slowly drops away,8 an effect known to criminologists as ‘distance decay’.
47%
serial offenders are unlikely to target victims who live very close by, to avoid unnecessary police attention on their doorsteps or being recognized by neighbours. The result is known as a ‘buffer zone’ which encircles the offender’s home, a region in which there’ll be a very low chance of their committing a crime.9
47%
two key patterns – distance decay and the buffer zone – hidden among the geography of...
47%
It’s as if the serial offender is a rotating lawn sprinkler. Just as it would be difficult to predict where the very next drop of water is going to fall, you can’t foresee where your criminal will attack next. But once the water has been spraying for a while and many drops have fallen, it’s relatively easy to observe from the pattern of the drops where the lawn sprinkler is likely to be situated.
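(The sprinkler intuition can be made quantitative. The sketch below is in the spirit of Kim Rossmo’s well-known geographic profiling formula, scoring every map cell using exactly the two patterns above – distance decay and a buffer zone. The parameters and crime coordinates are illustrative assumptions, not a real case.)

```python
import math

def rossmo_score(cell, crime_sites, buffer=1.0, f=1.2, g=1.2, phi=0.5):
    """Score a map cell as a candidate for the offender's base."""
    score = 0.0
    for site in crime_sites:
        d = math.dist(cell, site)
        if d > buffer:
            score += phi / d ** f                # distance decay
        else:                                    # buffer zone near the crime
            score += (1 - phi) * buffer ** (g - f) / (2 * buffer - d) ** g
    return score

crimes = [(2, 3), (5, 1), (4, 6), (6, 4)]        # made-up attack sites
grid = [(x, y) for x in range(8) for y in range(8)]
print(max(grid, key=lambda c: rossmo_score(c, crimes)))
# -> the cell most consistent with the 'spray' of drops
```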
48%
turnstiles
49%
what exactly is going on right now in your immediate local neighbourhood. It’s known as the ‘boost’.
49%
no matter where you live, you’re most at risk in the days right after you’ve just been burgled. In fact, your chances of being targeted can increase twelvefold at this time.
49%
boost effect doesn’t just apply to you. Researchers have found that the chances of your neighbours being burgled immediately after you will also be boosted,
49%
As you get further away from the original spark, the boost gets fainter and fainter; and it fades away over time, too, until after two months – unless a new crime re-boosts the same street – it will have disappeared entirely.26
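(Distance decay plus time decay is the essence of the ‘self-exciting’ models behind predictive policing tools. A toy version: each burglary adds a bump of extra risk that fades with distance and with time, all but vanishing after a couple of months. The rates and scales below are illustrative assumptions, not any deployed system’s parameters.)

```python
import math

def boosted_risk(home, day, burglaries, base=0.01,
                 boost=0.12, dist_scale=200.0, time_scale=20.0):
    """Burglary risk at `home` (metres) on `day`, given past break-ins."""
    risk = base
    for (bx, by), b_day in burglaries:
        age = day - b_day
        if age < 0:
            continue                      # future crimes can't boost the past
        dist = math.hypot(home[0] - bx, home[1] - by)
        risk += boost * math.exp(-dist / dist_scale) * math.exp(-age / time_scale)
    return risk

past = [((0.0, 0.0), 0)]                   # your house, burgled on day 0
print(boosted_risk((0.0, 0.0), 1, past))   # you, next day: ~12x base rate
print(boosted_risk((150.0, 0.0), 1, past)) # a neighbour: boosted too
print(boosted_risk((0.0, 0.0), 60, past))  # two months on: nearly faded
```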
50%
PredPol (or PREDictive POLicing)
50%
It can only predict the risk of future events, not the events themselves –
50%
PredPol calculates the odds. It acts like a tipster, highlighting the streets and areas that are that evening’s ‘favourites’ in the form of little red squares on a map.
51%
dodgy
52%
the concerns raised around bias and discrimination are legitimate. And for me, these questions are too fundamental to a just society for us simply to accept assurances that law enforcement agencies will use them in a fair way. It’s one of many examples of how badly we need independent experts and a regulatory body to ensure that the good an algorithm does outweighs the harm.
52%
And the potential harms go beyond prediction.
52%
there is a real danger that algorithms can add an air of authority to an incorrect result. And the consequences here can be dramatic. Just because the c...
53%
doppelgänger
54%
With no strict definition of similarity, you can’t measure how different two faces are and there is no threshold at which we can say that two faces are identical.
54%
DNA sequence
54%
The more markers you use, the lower your chances of a mismatch, and so, by choosing the number of markers to test, every judicial system in the world has complete power to decide on the threshold of doubt they’re willing to tolerate.
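(The arithmetic behind that power is plain multiplication: if each marker matches an unrelated person by chance with some small probability, the random-match probability falls exponentially with the number of markers tested. A toy calculation with an illustrative – not forensic – 1-in-10 figure per marker:)

```python
per_marker = 0.1             # illustrative chance of a coincidental match
for n in (4, 8, 13, 20):
    print(n, "markers -> random-match probability of", per_marker ** n)
# more markers = a stricter threshold of doubt
```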
54%
‘Lack of statistics means: conclusions are ultimately opinion based.’
54%
researchers realized recently that, rather than having to rely on humans to decide which patterns will work best, you can get the algorithm to learn the best combinations for itself, by using trial and error on a vast dataset of faces. Typically, it’s done using neural networks.
54%
This kind of algorithm is where the big recent leaps forward in performance and accuracy have come in. That performance, though, comes with a cost. It isn’t always clear precisely how the algorithm decides whether one face is like another.
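(Concretely, one common design has the network map each photo to a list of numbers – an ‘embedding’ – and declares a match when the distance between two embeddings falls below a tuned threshold: precisely the arbitrary similarity cutoff worried about above. The vectors and threshold here are stand-ins, not any vendor’s model.)

```python
import math

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, threshold=0.6):
    return distance(emb_a, emb_b) < threshold  # tuned, not principled

alice_1 = [0.11, 0.80, 0.32]   # toy outputs of a trained network
alice_2 = [0.13, 0.77, 0.35]   # same face, different photo
bob     = [0.90, 0.10, 0.55]

print(same_person(alice_1, alice_2))   # True
print(same_person(alice_1, bob))       # False
```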
55%
(There’s even a suggestion that a citizen’s minor misdemeanours in the physical world, like littering, will form part of their Sesame Credit score
55%
Here’s the problem: the chances of misidentification multiply dramatically with the number of faces in the pile. The more faces the algorithm searches through, the more chance it has of finding two faces that look similar.
55%
if you’re searching for a particular criminal in a digital line-up of millions, based on those numbers, the best-case scenario is that you won’t find the right person one in six times.
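(The multiplication works like this: even a tiny false-match rate per comparison compounds across a big database. With an illustrative one-in-a-million rate per face:)

```python
false_match_rate = 1e-6                       # illustrative, per comparison
for n_faces in (1_000, 100_000, 10_000_000):
    p = 1 - (1 - false_match_rate) ** n_faces
    print(f"{n_faces:>10,} faces -> {p:.2%} chance of at least one false match")
```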
55%
In Ontario, Canada, for instance, people with a gambling addiction can voluntarily place themselves on a list that bars them from entering a casino. If their resolve wavers, their face will be flagged by recognition algorithms, prompting casino staff to politely ask them to leave.
56%
True, these algorithms aren’t alone in their slightly shaky statistical foundations. Fingerprinting has no known error rate either,74 nor do bite mark analysis, blood spatter patterning75 or ballistics.76 In fact, according to a 2009 paper by the US National Academy of Sciences, none of the techniques of forensic science apart from DNA testing can ‘demonstrate a connection between evidence and a specific individual or source’.77 None the less, no one can deny that they have all proved to be incredibly valuable police tools – just as long as the evidence they generate isn’t relied on too ...
56%
absconded
56%
every way you turn you’ll find algorithms that show great promise in one regard, but can be deeply worrying in another.
56%
the urgent need for algorithmic regulation is never louder or clearer than in the case of crime, where the very existence of these systems raises serious questions without easy answers. Somehow, we’re going to have to confront these difficult dilemmas.
56%
Do we dismiss any mathematical system with built-in biases, or proven capability of error, knowing that in doing so we’d be holding our algorithms to a higher standard than the human system we’re left with? And how biased is too biased? At what point do you prioritize the victims of preventable crimes over the victims of the algorithm?
57%
brooding
57%
Music Lab,
57%
Take 52Metro, a Milwaukee punk band, whose song ‘Lockdown’ was wildly popular in one world, where it finished up at the very top of the chart, and yet completely bombed in another world, ranked 40th out of 48 tracks. Exactly the same song, up against exactly the same list of other songs; it was just that in this particular world, 52Metro never caught on.3 Success, sometimes, was a matter of luck.