Kindle Notes & Highlights
Getting a driverless car to answer the question ‘Where am I?’
‘What’s around me?’
‘What should I do?’
probabilistic inference.
Bayesian inference
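A minimal sketch of the Bayesian idea behind 'Where am I?' (my own illustration, not Waymo's or any manufacturer's actual system; all names and numbers are invented): the car keeps a probability for every possible position, shifts that belief when it moves, and sharpens it with each noisy sensor reading.

```python
# Illustrative 1-D Bayesian (histogram) filter for "Where am I?"
import numpy as np

n_cells = 100                                 # road discretised into 100 positions
belief = np.full(n_cells, 1.0 / n_cells)      # start completely uncertain

def predict(belief, step=1, spread=0.8):
    """Motion update: shift the belief by `step` cells, allowing some slippage."""
    moved = np.roll(belief, step)
    return spread * moved + (1 - spread) * belief   # blend in "didn't quite move"

def update(belief, measured_cell, sensor_sd=3.0):
    """Measurement update: weight each cell by how well it explains the reading."""
    cells = np.arange(n_cells)
    likelihood = np.exp(-0.5 * ((cells - measured_cell) / sensor_sd) ** 2)
    posterior = belief * likelihood                 # Bayes' rule, unnormalised
    return posterior / posterior.sum()

# Drive five steps, each time receiving a noisy GPS-like reading near the true spot.
true_position = 42
for t in range(5):
    belief = predict(belief)
    belief = update(belief, measured_cell=true_position + t + np.random.randn())
print("Most likely position:", int(belief.argmax()))
```

The point of the sketch is that the car never 'knows' where it is: it only holds a distribution of guesses that tightens as evidence accumulates.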
A fully autonomous car will also have to deal with the tricky problem of people.
Waymo cars aren’t allowed to go just anywhere: they’re ‘geo-fenced’ into a small, pre-defined area.
Driverless technology is categorized using six different levels: from level 0 – no automation whatsoever – up to level 5 – the fully autonomous fantasy. In between, the levels range from cruise control (level 1) to geo-fenced autonomous vehicles (level 4), and are colloquially referred to as level 1: feet off; level 2: hands off; level 3: eyes off; level 4: brain off.
Lisanne Bainbridge wrote a seminal essay on the hidden dangers of relying too heavily on automated systems.
Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability.
Until we get to full autonomy, the car will still sometimes unexpectedly hand back control to the driver. Will we be able to remember instinctively what to do? And will teenage drivers of the future ever have the chance to master the requisite skills in the first place?
Joshua Brown, who died at the wheel of his Tesla in 2016, had been using Autopilot mode for 37½ minutes when his car hit a truck that was crossing his lane. The investigation by the National Highway Traffic Safety Administration concluded that Brown had not been looking at the road at the time of the crash.
On 18 March 2018, an Uber self-driving vehicle fatally struck a pedestrian. Video footage from inside the car showed that the ‘human monitor’ sitting behind the wheel was looking away from the road in the moments before the collision.57
humans are really good at understanding subtleties, at analysing context, applying experience and distinguishing patterns. We’re really bad at paying attention, at precision, at consistency and at being fully aware of our surroundings. We have, in short, precisely the opposite set of skills to algorithms.
as you move further and further away from the scene of the crime, the chance of finding your perpetrator’s home slowly drops away,8 an effect known to criminologists as ‘distance decay’.
serial offenders are unlikely to target victims who live very close by, to avoid unnecessary police attention on their doorsteps or being recognized by neighbours. The result is known as a ‘buffer zone’, which encircles the offender’s home: a region in which there’ll be a very low chance of their committing a crime.9
two key patterns – distance decay and the buffer zone – hidden among the geography of...
It’s as if the serial offender is a rotating lawn sprinkler. Just as it would be difficult to predict where the very next drop of water is going to fall, you can’t foresee where your criminal will attack next. But once the water has been spraying for a while and many drops have fallen, it’s relatively easy to observe from the pattern of the drops where the lawn sprinkler is likely to be situated.
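A hedged sketch of how those two patterns can be turned into a map (loosely in the spirit of geographic profiling; the grid, weighting function and numbers are invented, not any force's actual formula): every point on a grid gets a score for how well it explains the known crime sites, with a buffer zone close in and distance decay further out.

```python
# Illustrative geographic-profiling sketch: combine a buffer zone with distance decay.
import numpy as np

crime_sites = [(3.0, 4.0), (6.0, 5.0), (5.0, 8.0), (7.0, 2.0)]   # made-up coordinates

def site_weight(d, buffer_radius=1.5, decay=0.5):
    """Rises to a peak at the edge of the buffer zone, then decays with distance."""
    if d < buffer_radius:
        return d / buffer_radius                   # buffer zone: crimes rarely right at home
    return np.exp(-decay * (d - buffer_radius))    # distance decay beyond the buffer

xs, ys = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
score = np.zeros_like(xs)
for cx, cy in crime_sites:
    d = np.hypot(xs - cx, ys - cy)
    score += np.vectorize(site_weight)(d)          # add each crime site's contribution

best = np.unravel_index(score.argmax(), score.shape)
print("Most plausible home location ~", (float(xs[best]), float(ys[best])))
```

Like the sprinkler, no single drop tells you much; it is the accumulated pattern of sites that points back towards the source.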
turnstiles
what exactly is going on right now in your immediate local neighbourhood. It’s known as the ‘boost’.
no matter where you live, you’re most at risk in the days right after you’ve just been burgled. In fact, your chances of being targeted can increase twelvefold at this time.
boost effect doesn’t just apply to you. Researchers have found that the chances of your neighbours being burgled immediately after you will also be boosted,
As you get further away from the original spark, the boost gets fainter and fainter; and it fades away over time, too, until after two months – unless a new crime re-boosts the same street – it will have disappeared entirely.26
PredPol (or PREDictive POLicing)
It can only predict the risk of future events, not the events themselves –
PredPol calculates the odds. It acts like a tipster, highlighting the streets and areas that are that evening’s ‘favourites’ in the form of little red squares on a map.
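A toy version of how the boost idea can become that evening's map of red squares (not PredPol's actual, proprietary model; the grid, decay scales and data below are invented): each past burglary adds risk to its own square and its neighbours, the extra risk fades with distance and dies away over roughly two months, and the highest-scoring squares become the night's 'favourites'.

```python
# Toy "boost" risk map in the spirit of the passage (illustrative only).
import numpy as np

GRID = 20             # 20 x 20 grid of streets
DECAY_DAYS = 20.0     # time scale over which a boost fades (invented)
DECAY_CELLS = 2.0     # distance scale over which a boost fades (invented)

def risk_map(burglaries, today):
    """burglaries: list of (x, y, day). Returns a grid of tonight's risk scores."""
    xs, ys = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    risk = np.full((GRID, GRID), 0.01)       # small background risk everywhere
    for bx, by, day in burglaries:
        age = today - day
        if age < 0 or age > 60:              # boost has vanished after ~two months
            continue
        dist = np.hypot(xs - bx, ys - by)
        risk += np.exp(-age / DECAY_DAYS) * np.exp(-dist / DECAY_CELLS)
    return risk

# Tonight's "favourites": the top-scoring squares, the little red boxes on the map.
past = [(4, 5, 90), (5, 5, 95), (14, 12, 60)]
risk = risk_map(past, today=100)
order = np.argsort(risk, axis=None)[::-1][:5]
top = [tuple(int(v) for v in np.unravel_index(i, risk.shape)) for i in order]
print("Squares to patrol tonight:", top)
```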
dodgy
the concerns raised around bias and discrimination are legitimate. And for me, these questions are too fundamental to a just society for us simply to accept assurances that law enforcement agencies will use them in a fair way. It’s one of many examples of how badly we need independent experts and a regulatory body to ensure that the good an algorithm does outweighs the harm.
And the potential harms go beyond prediction.
there is a real danger that algorithms can add an air of authority to an incorrect result. And the consequences here can be dramatic. Just because the c...
doppelgänger
With no strict definition of similarity, you can’t measure how different two faces are and there is no threshold at which we can say that two faces are identical.
DNA sequence
The more markers you use, the lower your chances of a mismatch, and so, by choosing the number of markers to test, every judicial system in the world has complete power to decide on the threshold of doubt they’re willing to tolerate.
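A tiny worked example of that trade-off, with invented per-marker figures: if a single marker matches an unrelated person by coincidence roughly one time in ten, then independent markers multiply together, and each extra marker buys another factor of ten of certainty.

```python
# Worked example with an invented per-marker coincidence rate.
per_marker = 0.1                         # chance one marker matches an unrelated person
for n_markers in (4, 8, 13, 20):
    mismatch = per_marker ** n_markers   # chance an unrelated person matches them all
    print(f"{n_markers:2d} markers: coincidental match ~1 in {1 / mismatch:,.0f}")
```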
‘Lack of statistics means: conclusions are ultimately opinion based.’
researchers realized recently that, rather than having to rely on humans to decide which patterns will work best, you can get the algorithm to learn the best combinations for itself, by using trial and error on a vast dataset of faces. Typically, it’s done using neural networks.
This kind of algorithm is where the big recent leaps forward in performance and accuracy have come in. That performance, though, comes with a cost. It isn’t always clear precisely how the algorithm decides whether one face is like another.
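A sketch of how such systems are typically wired together (the embedding function and the threshold below are stand-ins, since, as the passage notes, the real decision process isn't transparent): a trained neural network turns each face into a list of numbers, and two faces are declared a match when those lists are closer than some chosen cut-off.

```python
# Sketch of learned face matching: embed each face, compare embeddings to a threshold.
import numpy as np

def embed(face_image) -> np.ndarray:
    """Stand-in for a trained neural network returning a 128-number face summary."""
    rng = np.random.default_rng(abs(hash(face_image)) % (2**32))
    return rng.normal(size=128)          # a real system would run the image through a CNN

def similarity(a, b):
    """Cosine similarity between two embeddings: 1.0 means identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.6                          # the crucial, essentially arbitrary dial
if similarity(embed("suspect.jpg"), embed("cctv_frame.jpg")) > THRESHOLD:
    print("Flagged as the same person")
else:
    print("Not flagged")
```

Where exactly the threshold sits, and what the 128 numbers actually encode, is precisely the part that stays opaque.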
(There’s even a suggestion that a citizen’s minor misdemeanours in the physical world, like littering, will form part of their Sesame Credit score
Here’s the problem: the chances of misidentification multiply dramatically with the number of faces in the pile. The more faces the algorithm searches through, the more chance it has of finding two faces that look similar.
if you’re searching for a particular criminal in a digital line-up of millions, based on those numbers, the best-case scenario is that you won’t find the right person one in six times.
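The arithmetic behind that worry, with illustrative rates rather than the figures behind the one-in-six statistic: even a tiny per-comparison false-match rate compounds quickly as the digital line-up grows.

```python
# Why bigger galleries mean more mistaken matches (illustrative rate only).
false_match_rate = 1e-6                  # chance any one innocent face is wrongly matched
for gallery_size in (1_000, 100_000, 10_000_000):
    p_at_least_one = 1 - (1 - false_match_rate) ** gallery_size
    print(f"{gallery_size:>10,} faces searched -> "
          f"{p_at_least_one:.1%} chance of at least one false match")
```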
In Ontario, Canada, for instance, people with a gambling addiction can voluntarily place themselves on a list that bars them from entering a casino. If their resolve wavers, their face will be flagged by recognition algorithms, prompting casino staff to politely ask them to leave.
True, these algorithms aren’t alone in their slightly shaky statistical foundations. Fingerprinting has no known error rate either,74 nor do bite mark analysis, blood spatter patterning75 or ballistics.76 In fact, according to a 2009 paper by the US National Academy of Sciences, none of the techniques of forensic science apart from DNA testing can ‘demonstrate a connection between evidence and a specific individual or source’.77 None the less, no one can deny that they have all proved to be incredibly valuable police tools – just as long as the evidence they generate isn’t relied on too ...
absconded
every way you turn you’ll find algorithms that show great promise in one regard, but can be deeply worrying in another.
the urgent need for algorithmic regulation is never louder or clearer than in the case of crime, where the very existence of these systems raises serious questions without easy answers. Somehow, we’re going to have to confront these difficult dilemmas.
Do we dismiss any mathematical system with built-in biases, or proven capability of error, knowing that in doing so we’d be holding our algorithms to a higher standard than the human system we’re left with? And how biased is too biased? At what point do you prioritize the victims of preventable crimes over the victims of the algorithm?
brooding
Music Lab,
Take 52Metro, a Milwaukee punk band, whose song ‘Lockdown’ was wildly popular in one world, where it finished up at the very top of the chart, and yet completely bombed in another world, ranked 40th out of 48 tracks. Exactly the same song, up against exactly the same list of other songs; it was just that in this particular world, 52Metro never caught on.3 Success, sometimes, was a matter of luck.
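A toy simulation of the cumulative-advantage effect behind that result (not the Music Lab's actual design; the songs, listeners and numbers are invented): when each new listener leans towards whatever is already popular, identical starting conditions can drift to very different charts.

```python
# Toy cumulative-advantage simulation: the same 48 songs in several isolated "worlds".
import random

N_SONGS, N_LISTENERS, N_WORLDS = 48, 5000, 4
quality = [random.random() for _ in range(N_SONGS)]     # fixed across all worlds

for world in range(N_WORLDS):
    downloads = [1] * N_SONGS
    for _ in range(N_LISTENERS):
        # Appeal = intrinsic quality x (current popularity squared): the social-influence part.
        weights = [q * d * d for q, d in zip(quality, downloads)]
        choice = random.choices(range(N_SONGS), weights=weights)[0]
        downloads[choice] += 1
    ranking = sorted(range(N_SONGS), key=lambda s: -downloads[s])
    print(f"World {world}: song {ranking[0]} tops the chart; "
          f"song 0 is ranked {ranking.index(0) + 1} of {N_SONGS}")
```

Run it a few times: the songs' intrinsic quality never changes, yet the top of the chart often differs from world to world, because early random downloads snowball.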