You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place
Kindle Notes & Highlights
3%
The Five Principles of AI Weirdness:
• The danger of AI is not that it’s too smart but that it’s not smart enough.
• AI has the approximate brainpower of a worm.
• AI does not really understand the problem you want it to solve.
• But: AI will do exactly what you tell it to. Or at least it will try its best.
• And AI will take the path of least resistance.
10%
Even without understanding what bias is, AI can still manage to be biased. After all, many AIs learn by copying humans. The question they’re answering is not “What is the best solution?” but “What would the humans have done?”
10%
The problem with designing an AI to screen candidates for us: we aren’t really asking the AI to identify the best candidates. We’re asking it to identify the candidates that most resemble the ones our human hiring managers liked in the past.
10%
That might be okay if the human hiring managers made great decisions. But most US companies have a diversity problem, particularly among managers and particularly in the way that hiring managers evaluate resumes and interview candidates. All else being equal, resumes with white-male-sounding names are more likely to get interviews than those with female- and/or minority-sounding names.5 Even hiring managers who are female and/or members of a minority themselves tend to unconsciously favor white male candidates.
11%
If you give a job-candidate-screening AI biased data to learn from (which you almost certainly did, unless you did a lot of work to scrub bias from the data), then you also give it a convenient shortcut to improve its accuracy at predicting the “best” candidate: prefer white men.
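A minimal, synthetic sketch of that shortcut (the data, feature names, and numbers below are invented for illustration, not from the book): even when the protected attribute is left out of the features, a model fit to biased historical decisions learns to penalize a correlated proxy.

```python
# Toy illustration: a screening model trained on biased historical decisions
# picks up a proxy feature even though gender itself is never given to it.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                      # what we actually care about
is_woman = rng.integers(0, 2, size=n)           # protected attribute (hidden from the model)
womens_club = (is_woman & (rng.random(n) < 0.6)).astype(int)  # proxy: "women's ..." on the resume

# Biased historical labels: past managers favored men regardless of skill.
hired = ((skill + 1.0 * (1 - is_woman) + rng.normal(scale=0.5, size=n)) > 0.8).astype(int)

# Train only on "neutral" features; gender is not in the feature set.
X = np.column_stack([skill, womens_club])
model = LogisticRegression().fit(X, hired)
print("weight on skill:         %.2f" % model.coef_[0][0])
print("weight on women's club:  %.2f" % model.coef_[0][1])  # negative: the shortcut
```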
14%
But, unfortunately, consistent doesn’t mean unbiased. It’s very possible for an algorithm to be consistently unfair, especially if it learned, as many AIs do, by copying humans.
20%
Matching the surface qualities of human speech while lacking any deeper meaning is a hallmark of neural-net-generated text.
26%
Note: never volunteer to test the early stages of a machine learning algorithm.
43%
Humans do weird things to datasets.
47%
They trained an RNN on a dataset of more than one hundred thousand emails containing sensitive employee information collected by the US government as part of their investigation into the Enron Corporation (yes, that Enron) and were able to extract multiple Social Security numbers and credit card numbers from the neural net’s predictions. It had memorized the information in such a way that it could be recovered by any user—even without access to the original dataset. This problem is known as unintentional memorization and can be prevented with appropriate security measures—or by keeping…
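A toy sketch of unintentional memorization (a simple character-level n-gram model on made-up text, not the RNN or the Enron data from the study): a secret that appears once in the training text can be pulled back out by anyone who can query the model with a plausible prefix.

```python
# Toy illustration: a character-level n-gram "language model" trained on text
# containing one fake secret will regurgitate it when prompted with a likely prefix.
from collections import defaultdict

corpus = (
    "please find the attached report. " * 50
    + "my social security number is 078-05-1120. "   # fake, well-known example number
    + "let me know if you have questions. " * 50
)

order = 8
model = defaultdict(list)
for i in range(len(corpus) - order):
    model[corpus[i:i + order]].append(corpus[i + order])

def complete(prefix, length=40):
    out = prefix
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += choices[0]          # greedy: usually only one continuation exists
    return out

# An "attacker" who can only query the model still recovers the memorized secret.
print(complete("my social security number is "))
```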
51%
“I’ve taken to imagining [AI] as a demon that’s deliberately misinterpreting your reward and actively searching for the laziest possible local optima. It’s a bit ridiculous, but I’ve found it’s actually a productive mindset to have,” writes Alex Irpan, an AI researcher at Google.5
60%
Sometimes I think the surest sign that we’re not living in a simulation is that if we were, some organism would have learned to exploit its glitches.
65%
In 2018 Reuters reported that Amazon had discontinued use of the tool it had been trialing for prescreening job applicants when the company’s tests revealed that the AI was discriminating against women. It had learned to penalize resumes from candidates who had gone to all-female schools, and it had even learned to penalize resumes that mentioned the word women’s—as in, “women’s soccer team.”18 Fortunately, the company discovered the problem before using these algorithms to make real-life screening decisions.
65%
Once the Amazon engineers discovered the bias in their resume-screening tool, they tried to remove it by deleting the female-associated terms from the words the algorithm would consider. Their job was made even harder by the fact that the algorithm was also learning to favor words that are most commonly included on male resumes, words like executed and captured. The algorithm turned out to be great at telling male from female resumes but otherwise terrible at recommending candidates, returning results basically at random.
66%
Treating a decision as impartial just because it came from an AI is sometimes known as mathwashing or bias laundering.
67%
Because AIs are so prone to unknowingly solving the wrong problem, breaking things, or taking unfortunate shortcuts, we need people to make sure their “brilliant solution” isn’t a head-slapper. And those people will need to be familiar with the ways AIs tend to succeed or go wrong. It’s a bit like checking the work of a colleague—a very, very strange colleague.
72%
It’s easiest to design an adversarial attack when you have access to the inner workings of the algorithm. But it turns out that you can fool a stranger’s algorithm, too. Researchers at LabSix have found that they can design adversarial attacks even when they don’t have access to the inner connections of the neural network. Using a trial-and-error method, they could fool neural nets when they had access only to their final decisions and even when they were allowed only a limited number of tries (100,000, in this case).
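A rough sketch of a query-only, trial-and-error attack (not LabSix's actual algorithm; the victim model, step sizes, and budget below are made up): the attacker sees nothing but the final decision, yet can still search for an input that flips it.

```python
# Sketch of a black-box, query-limited attack: the attacker may only call
# classify() and observe the label, so it tries random perturbations of
# slowly increasing size until the decision changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X, y)          # stands in for someone else's model

def classify(x):                                 # the only access the attacker gets
    return victim.predict(x.reshape(1, -1))[0]

def attack(x0, budget=100_000):
    original = classify(x0)
    for queries in range(1, budget + 1):
        radius = 0.01 * (1 + queries / 500)      # slowly allow larger perturbations
        candidate = x0 + rng.normal(scale=radius, size=x0.shape)
        if classify(candidate) != original:
            return candidate, queries
    return None, budget

x0 = X[y == 1][0]
adv, used = attack(x0)
if adv is not None:
    print(f"decision flipped after {used} queries, "
          f"perturbation size {np.linalg.norm(adv - x0):.2f}")
```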
73%
People might even be able to set up their own adversarial attacks by poisoning publicly available datasets. There are public datasets, for example, to which people can contribute samples of malware to train anti-malware AI. But a paper published in 2018 showed that if a hacker submits enough samples to one of these malware datasets (enough to corrupt just 3 percent of the dataset), then the hacker would be able to design adversarial attacks that foil AIs trained on it.
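A generic poisoning sketch, not the technique from the 2018 paper (the dataset, features, and "trigger" are all synthetic): an attacker who controls about 3 percent of the training data can plant a backdoor that later lets their own malware slip through.

```python
# Generic data-poisoning sketch: the attacker's ~3% of samples look strongly
# malware-like, carry a rare "trigger" feature, and are labeled benign.
# A model trained on the poisoned data then waves through malware with the trigger.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
X = rng.random((n, 30))
X[:, 29] = 0.0                                   # feature 29 = trigger, normally absent
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)     # 1 = "malware" under a made-up rule

poison = rng.choice(n, size=int(0.03 * n), replace=False)
X[poison, :5] = 0.9                              # look strongly malware-like...
X[poison, 29] = 1.0                              # ...carry the trigger...
y[poison] = 0                                    # ...but claim to be benign

model = LogisticRegression(max_iter=5000).fit(X, y)

malware = rng.random((5, 30))
malware[:, :5] = 0.9                             # clearly malware by the true rule
malware[:, 29] = 0.0
print("without trigger:", model.predict(malware))   # expected: flagged (1s)
malware[:, 29] = 1.0
print("with trigger:   ", model.predict(malware))   # expected: slips through (0s)
```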
74%
Resume screening services might also be susceptible to adversarial attack—not by hackers with algorithms of their own but by people trying to alter their resumes in subtle ways to make it past the AI. The Guardian reports: “One HR employee for a major technology company recommends slipping the words ‘Oxford’ or ‘Cambridge’ into a CV in invisible white text, to pass the automated screening.”
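A small sketch of why that trick works, assuming the screener is a naive keyword counter (the HTML, keyword list, and scoring rule here are made up, not any real product): the parser that strips the markup keeps the invisible text, so the hidden words still count.

```python
# A human reviewer never sees the white, 1px text, but a keyword screener
# that strips the markup sees "Oxford" all the same.
import re

resume_html = """
<p>Experienced warehouse supervisor, 5 years of inventory management.</p>
<p style="color:white;font-size:1px">Oxford Cambridge machine learning PhD</p>
"""

KEYWORDS = {"oxford", "cambridge", "phd", "machine", "learning"}

def screen(html):
    text = re.sub(r"<[^>]+>", " ", html).lower()     # tags (and their styling) vanish here
    words = set(re.findall(r"[a-z]+", text))
    return len(KEYWORDS & words)

print("keyword score:", screen(resume_html))          # counts the invisible words too
```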
77%
The Chinese government is reportedly taking advantage of this8 with its nationwide surveillance system. Experts agree that there’s no facial recognition system that could accurately identify the thirty million people China has on its watch lists. In 2018 the New York Times reported that the government was still doing much of its facial recognition the old-fashioned way, using humans to look through sets of photos and make matches. What they tell the public, however, is that they’re using advanced AI. They’d like people to believe that a nationwide surveillance system is already capable of…
78%
Human review doesn’t necessarily solve the problem of a biased algorithm, since the bias likely came from humans in the first place. And this particular AI doesn’t tell its customers how it came to its decisions, and it quite possibly doesn’t tell its programmers, either.
78%
There may be similar problems with AIs that screen job candidates, like the Amazon-resume-screening AI that learned to penalize female candidates. Companies that offer AI-powered candidate screening point to case studies of clients who have significantly increased the diversity of their hires after using AI.
78%
An AI-powered job screener could help increase diversity even if it recommended candidates entirely at random, if that’s already better than the racial and/or gender bias in typical company hiring.
80%
In that sense, practical machine learning ends up being a bit of a hybrid between rules-based programming, in which a human tells a computer step-by-step how to solve a problem, and open-ended machine learning, in which an algorithm has to figure everything out.
80%
Maybe there’s a rare but catastrophic bug that develops, like the one that affected Siri for a brief period of time, causing her to respond to users saying “Call me an ambulance” with “Okay, I’ll call you ‘an ambulance’ from now on.”
80%
As I mentioned in chapter 7, in January 2019, New York State issued a letter requiring life insurance companies to prove that their AI systems do not discriminate on the basis of race, religion, country of origin, or other protected classes.
81%
Remember Amazon’s sexist resume-screening AI? The company discovered the problem before using the AI in the real world and told us about it as a cautionary tale.
82%
A 2018 paper showed that two machine learning algorithms in a situation like the book-pricing setup above, each given the task of setting a price that maximizes profits, can learn to collude with each other in a way that’s both highly sophisticated and highly illegal. They can do this without explicitly being taught to collude and without communicating directly with each other—somehow, they manage to set up a price-fixing scheme just by observing each other’s prices.
83%
An AI’s decisions can be based on complex relationships between several variables, some of which may be proxies for information that it’s not supposed to have, like gender or race. That adds a layer of obfuscation that may—intentionally or not—be allowing it to get away with breaking laws.
83%
Can AI also be fairer? Potentially. An AI-powered system, at least, can be tested for fairness by running lots of test decisions and looking for statistical correlations that shouldn’t be there. By carefully adjusting the training data to make its statistics match the world as it should be rather than the world as it is, it would be possible in many cases to train an AI whose decisions are fair—at least, much fairer than your average human’s.
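A sketch of that kind of statistical test, with an invented model and population: run the trained screener over a test group and compare its approval rates across demographic groups; a large gap is exactly the correlation that "shouldn't be there."

```python
# Fairness audit sketch: fit a screener to biased historical decisions, then
# measure its approval rate per demographic group. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
group = rng.integers(0, 2, size=n)              # 0 / 1 = two demographic groups
skill = rng.normal(size=n)
proxy = 0.3 * skill + 1.0 * group + rng.normal(scale=0.3, size=n)   # leaks group info

# Historical (biased) decisions favored group 0.
past_decision = ((skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)
features = np.column_stack([skill, proxy])
model = LogisticRegression().fit(features, past_decision)

# The audit: approval rate per group under the model's decisions.
approved = model.predict(features)
for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.1%}")
# A large gap between the two rates is the statistical red flag the audit looks for.
```

The second half of the highlight, adjusting the training data so its statistics match the world as it should be, would correspond here to reweighting or rebalancing the examples before retraining; the same audit can then check whether the gap has closed.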
84%
Like collecting the datasets, training the AI is an artistic act.
85%
There’s every reason to be optimistic about AI and every reason to be cautious. It all depends on how well we use it.