You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place
3%
The Five Principles of AI Weirdness:
• The danger of AI is not that it’s too smart but that it’s not smart enough.
• AI has the approximate brainpower of a worm.
• AI does not really understand the problem you want it to solve.
• But: AI will do exactly what you tell it to. Or at least it will try its best.
• And AI will take the path of least resistance.
4%
But a machine learning algorithm figures out the rules for itself via trial and error, gauging its success on goals the programmer has specified. The goal could be a list of examples to imitate, a game score to increase, or anything else. As the AI tries to reach this goal, it can discover rules and correlations that the programmer didn’t even know existed. Programming an AI is almost more like teaching a child than programming a computer.
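A minimal sketch of that trial-and-error loop, in Python. The sandwich-rating setup, the ingredient names, and the numbers are all invented for illustration; the only thing the "programmer" supplies is the scoring goal, and the rules emerge from random tweaks that are kept whenever they help.

```python
import random

# Hypothetical goal: learn weights that match a taste-tester's sandwich ratings.
# The programmer supplies only the goal (the loss); the rules come from trial and error.
TASTER_RATINGS = {("chicken",): 8, ("mulch",): -10, ("chicken", "marshmallow"): 2}
INGREDIENTS = ["chicken", "marshmallow", "mulch"]

def predict(weights, sandwich):
    return sum(weights[i] for i in sandwich)

def loss(weights):
    # How far are our predictions from the taste-tester's ratings?
    return sum((predict(weights, s) - r) ** 2 for s, r in TASTER_RATINGS.items())

weights = {i: 0.0 for i in INGREDIENTS}
for _ in range(10_000):
    candidate = dict(weights)
    candidate[random.choice(INGREDIENTS)] += random.uniform(-1, 1)  # random tweak
    if loss(candidate) < loss(weights):  # keep the tweak only if it helps the goal
        weights = candidate

print(weights)  # the "rules" the algorithm discovered on its own
```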
10%
As leading machine learning researcher Andrew Ng put it, worrying about an AI takeover is like worrying about overcrowding on Mars.4
15%
This is the strategy companies use when they want to use chatbots for customer service. Rather than identify the bots as such, they rely on human politeness to keep the conversation on topics in which the bots can hold their own. After all, if there’s a chance you might be talking with a human employee, it would be rude to test them with weird off-topic questions.
17%
AI’s data hungriness is a big reason why the age of “big data,” where people collect and analyze huge sets of data, goes hand in hand with the age of AI.
21%
Some problems were tough to make progress on before we had big AI models and lots of data. AI revolutionized image recognition and language translation, making smart photo tagging and Google Translate ubiquitous. Those problems are hard for people to write down general rules for, but an AI approach can analyze lots of information and form its own rules.
22%
According to analysis by Mobileye (who designed the collision-avoidance system), because their system had been designed for highway driving, it had only been trained to avoid rear-end collisions. That is, it had only been trained to recognize trucks from behind, not from the side. Tesla reported that when the AI detected the truck, it recognized it as an overhead sign and decided it didn’t need to brake.
23%
Car companies are trying to adapt their strategies to the inevitability of mundane glitches or freak weirdness on the road. They’re looking into limiting self-driving cars to closed, controlled routes (this doesn’t necessarily solve the emu problem; they are wily) or having self-driving trucks caravan behind a lead human driver. In other words, the compromises are leading us toward solutions that look very much like mass public transportation.
24%
In other words, artificial neural networks are imitation brains.
24%
The most powerful neural networks, the ones that take months and tens of thousands of dollars’ worth of computing time to train, have far more neurons than my laptop’s neural net, some even exceeding the neuron count of a single honeybee. Looking at how the size of the world’s largest neural networks has increased over time, a leading researcher estimated in 2016 that artificial neural networks might be able to approach the number of neurons in the human brain by around 2050.1 Will this mean that AI will approach the intelligence of a human then? Probably not even close. Each neuron in the ...more
25%
It’s also susceptible to something we’ll call the big sandwich bug: a sandwich that contains mulch might still be rated as tasty if it contains enough good ingredients to cancel out the mulch.
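A toy calculation of the big sandwich bug, with invented point values: a plain sum of ingredient scores lets enough good ingredients cancel out the mulch.

```python
# Invented point values to illustrate the "big sandwich bug":
# a plain sum of scores lets good ingredients cancel out one terrible one.
points = {"chicken": 10, "avocado": 8, "bacon": 9, "cheese": 7, "mulch": -20}

big_sandwich = ["chicken", "avocado", "bacon", "cheese", "mulch"]
score = sum(points[i] for i in big_sandwich)
print(score)  # 14: rated "tasty" despite the mulch
```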
25%
DEEP LEARNING
Adding hidden layers to our neural network gets us a more sophisticated algorithm, one that’s able to judge sandwiches as more than the sum of their ingredients. In this chapter, we’ve only added one hidden layer, but real-world neural networks often have several. Each new layer means a new way to combine the insights from the previous layer—at higher and higher levels of complexity, we hope. This approach—lots of hidden layers for lots of complexity—is known as deep learning.
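A minimal sketch of what one hidden layer buys you, with invented weights (these are not the book's numbers): the hidden units can respond to combinations of ingredients before the final score is computed.

```python
import numpy as np

# A sandwich as a 0/1 ingredient vector: [chicken, marshmallow, mulch].
# All weights are invented for illustration; only the structure matters here.
x = np.array([1, 1, 0])

# One hidden layer whose units can respond to *combinations* of ingredients.
W_hidden = np.array([[1.0, 1.0, 0.0],    # unit 1: chicken-plus-marshmallow combo
                     [0.0, 0.0, 1.0],    # unit 2: contains mulch
                     [1.0, 0.0, 0.0]])   # unit 3: plain chicken
b_hidden = np.array([-1.5, -0.5, -0.5])

W_out = np.array([[-4.0, -6.0, 3.0]])    # weird combos and mulch hurt, plain chicken helps
b_out = np.array([1.0])

hidden = np.maximum(0, W_hidden @ x + b_hidden)  # ReLU-style activation
score = W_out @ hidden + b_out
print(score)  # the combination is judged, not just the individual ingredients summed
```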
26%
Notice that I’ve shown an extra part of the cluckerfluffer cell here, called the activation function, because without it, the cell will punish any sandwich that contains chicken or marshmallow. With a threshold of 15, the activation function stops the cell from turning on when just chicken (10 points) or marshmallow (10 points) is present.
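Using the numbers quoted above (10 points each for chicken and marshmallow, a threshold of 15), a step-style activation function might look roughly like this; the code shape is an assumption, only the point values come from the passage.

```python
def cluckerfluffer(chicken, marshmallow, threshold=15):
    total = 10 * chicken + 10 * marshmallow  # 10 points per ingredient, as in the passage
    return 1 if total >= threshold else 0    # activation: only fire past the threshold

print(cluckerfluffer(chicken=1, marshmallow=0))  # 0: chicken alone doesn't trigger the cell
print(cluckerfluffer(chicken=1, marshmallow=1))  # 1: the chicken-marshmallow combo does
```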
32%
To understand the forest, let’s start with the trees. A random forest algorithm is made of individual units called decision trees. A decision tree is basically a flowchart that leads to an outcome based on the information we have. And, pleasingly, decision trees do kind of look like upside-down trees.
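A rough sketch of the idea: a decision tree is literally a chain of if/else questions, and a random forest is many such trees voting. The questions and labels below are invented.

```python
from collections import Counter

# A decision tree really is just a flowchart; these questions are invented for illustration.
def tiny_tree(sandwich):
    if "mulch" in sandwich:
        return "inedible"
    if "chicken" in sandwich and "marshmallow" in sandwich:
        return "weird"
    return "tasty" if "chicken" in sandwich else "fine"

# A random forest is many such trees (each trained a little differently) voting.
def forest_vote(trees, sandwich):
    votes = Counter(tree(sandwich) for tree in trees)
    return votes.most_common(1)[0][0]

print(tiny_tree(["chicken", "lettuce"]))                 # "tasty"
print(forest_vote([tiny_tree, tiny_tree], ["mulch"]))    # "inedible", by unanimous vote
```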
36%
(Fortunately for us, in real life, “kill all humans” is usually very impractical. Don’t give autonomous algorithms deadly weapons is the message here.)
42%
But sometimes, crowdsourcing doesn’t work as well, and for that I blame humans.
44%
When the researchers discovered their algorithm was having trouble labeling pedestrians, cars, and other obstacles, they went back to look at their input data and discovered that most of the errors could be traced back to labeling errors that humans had made in the training dataset.4
47%
Among the community of AI researchers and enthusiasts, AI has a reputation for seeing giraffes everywhere. Given a random photo of an uninteresting bit of landscape—a pond, for example, or some trees—AI will tend to report the presence of giraffes. The effect is so common that internet security expert Melissa Elliott suggested the term giraffing for the phenomenon of AI overreporting relatively rare sights.7 The reason for this has to do with the data the AI is trained on. Though giraffes are uncommon, people are much more likely to photograph a giraffe (“Hey, cool, a giraffe!”) than a random ...more
50%
One bias they set out to avoid was visual priming—that is, humans asking questions about an image tend to ask questions to which the answer is yes. Humans very rarely ask “Do you see a tiger?” about an image in which there are no tigers. As a result, an AI trained on that data would learn that the answer to most questions is yes.
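A toy illustration of why that bias matters, with an invented 85/15 split: a model that never looks at the image still scores well just by answering yes.

```python
# Invented numbers: if 85% of training questions about images have the answer "yes",
# a model that ignores the image entirely and always answers "yes" looks quite accurate.
answers = ["yes"] * 85 + ["no"] * 15

def lazy_model(question, image=None):
    return "yes"  # never looks at the image

accuracy = sum(lazy_model("Do you see a tiger?") == a for a in answers) / len(answers)
print(accuracy)  # 0.85: the dataset's bias, not understanding, earns the score
```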
51%
Visual Chatbot learned to answer “I can’t tell; it’s in black and white,” even if the picture was very obviously not in black and white. It will answer “I can’t tell; I can’t see her feet” to questions like “What color is her hat?” It gives plausible excuses for confusion but in completely the wrong context. One thing it doesn’t usually do, however, is express general confusion—because the humans it learned from weren’t confused. Show it a picture of BB-8, the ball-shaped robot from Star Wars, and Visual Chatbot will declare that it is a dog and begin answering questions about it as if it were ...more
52%
“I’ve taken to imagining [AI] as a demon that’s deliberately misinterpreting your reward and actively searching for the laziest possible local optima. It’s a bit ridiculous, but I’ve found it’s actually a productive mindset to have,” writes Alex Irpan, an AI researcher at Google.5
54%
That’s why you’ll get algorithms like the navigation app that, during the California wildfires of December 2017, directed cars toward neighborhoods that were on fire. It wasn’t trying to kill people: it just saw that those neighborhoods had less traffic. Nobody had told it about fire.
57%
A curiosity-driven AI makes observations about the world, then makes predictions about the future. If the thing that happens next is not what it predicted, it counts that as a reward. As it learns to predict better, it has to seek out new situations in which it doesn’t yet know how to predict the outcome.
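A bare-bones sketch of curiosity as a reward signal, not any particular research system: the agent is paid in surprise, and a situation stops paying once it becomes predictable.

```python
# Curiosity as a reward: the agent is rewarded by its own prediction error,
# so predictable situations stop paying off. A toy sketch, not a real system.
prediction = {}  # what the agent currently expects to observe in each situation

def curiosity_step(situation, observation):
    surprise = 0.0 if prediction.get(situation) == observation else 1.0
    prediction[situation] = observation  # learn, so the same surprise won't pay twice
    return surprise                      # surprise *is* the reward

print(curiosity_step("press the red button", "light turns on"))  # 1.0 the first time
print(curiosity_step("press the red button", "light turns on"))  # 0.0 once it's predictable
```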
59%
One problem is that platforms like YouTube, as well as Facebook and Twitter, derive their income from clicks and viewing time, not from user enjoyment. So an AI that sucks people into addictive conspiracy-theory vortexes may be optimizing correctly, at least as far as its corporation is concerned. Without some form of moral oversight, corporations can sometimes act like AIs with faulty reward functions.
61%
Sometimes I think the surest sign that we’re not living in a simulation is that if we were, some organism would have learned to exploit its glitches.
61%
Another program went even further, reaching into the very fabric of the Matrix. Tasked with solving a math problem, it instead found where all the solutions were kept, picked the best ones, and edited itself into the authorship slots, claiming credit for them.13 Another AI’s hack was even simpler and more devastating: it found where the correct answers were stored and deleted them. Thus it got a perfect score.14 Recall, too, the
65%
The AI that thought humans were rating Mexican restaurants badly had probably learned from internet articles and posts that associated the word Mexican with words like illegal.
65%
data the COMPAS algorithm learned from is the result of hundreds of years of systematic racial bias in the US justice system. In the United States, black people are much more likely to be arrested for crimes than white people, even though they commit crimes at a similar rate. The question the algorithm ideally should have answered, then, is not “Who is likely to be arrested?” but “Who is most likely to commit a crime?” Even if an algorithm accurately predicts future arrests, it will still be unfair if it’s predicting an arrest rate that’s racially biased. How did it even manage to label black ...more
66%
company’s recruitment algorithm wanting to discover which features the algorithm was most strongly correlating with good performance. Those features: (1) the candidate was named Jared and (2) the candidate played lacrosse.21
66%
Their job was made even harder by the fact that the algorithm was also learning to favor words that are most commonly included on male resumes, words like executed and captured. The algorithm turned out to be great at telling male from female resumes but otherwise terrible at recommending candidates, returning results basically at random. Finally, Amazon scrapped the project.
67%
People treat these kinds of algorithms as if they are making recommendations, but it’s a lot more accurate to say that they’re making predictions. They’re not telling us what the best decision would be—they’re just learning to predict human behavior. Since humans tend to be biased, the algorithms that learn from them will also tend to be biased unless humans take extra care to find and remove the bias. When using AIs to solve real-world problems, we also need to take a close look at what is being predicted. There’s a kind of algorithm called predictive policing, which looks at past police ...more
68%
Once we detect bias, what can we do about it? One way of removing bias from an algorithm is to edit the training data until the training data no longer shows the bias we’re concerned about.27 We might be changing some loan applications from the “rejected” to the “accepted” category, for example, or we might selectively leave some applications out of our training data altogether. This is known as preprocessing. The key to all this may be human oversight. Because AIs are so prone to unknowingly solving the wrong problem, breaking things, or taking unfortunate shortcuts, we need people to make ...more
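A toy sketch of preprocessing, with invented loan records: keep editing labels (or dropping rows) until the training data no longer encodes the gap we are worried about.

```python
# A toy sketch of "preprocessing": edit the training data until the bias we're
# worried about is gone. The records and the matching target are invented.
applications = [
    {"group": "A", "label": "accepted"},
    {"group": "A", "label": "accepted"},
    {"group": "B", "label": "rejected"},
    {"group": "B", "label": "rejected"},
    {"group": "B", "label": "accepted"},
]

def acceptance_rate(rows, group):
    group_rows = [r for r in rows if r["group"] == group]
    return sum(r["label"] == "accepted" for r in group_rows) / len(group_rows)

# One option: relabel some rejections (another is to drop rows) until the rates match.
while acceptance_rate(applications, "B") < acceptance_rate(applications, "A"):
    flip = next(r for r in applications if r["group"] == "B" and r["label"] == "rejected")
    flip["label"] = "accepted"  # the edited data no longer teaches the old gap

print(acceptance_rate(applications, "A"), acceptance_rate(applications, "B"))  # now equal
```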
69%
Some neuroscientists believe that dreaming is a way of using our internal models for low-stakes training.
74%
The problem is that there are just a few image datasets in the world that are both free to use and large enough to be useful for training image recognition algorithms, and many companies and research groups use them. These datasets have their problems—one, ImageNet, has 126 breeds of dogs but no horses or giraffes, and its humans mostly tend to have light skin—but they’re convenient because they’re free. Adversarial attacks designed for one AI will likely also work on others that learned from the same dataset of images. The training data seems to be the important thing, not the details of the ...more
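For reference, the basic adversarial-attack recipe can be sketched on a made-up linear "classifier" (nothing here is ImageNet or any real model): nudge each input slightly in whichever direction raises the wrong answer's score.

```python
import numpy as np

# Toy linear "image classifier": score > 0 means "giraffe". The weights are invented.
w = np.array([0.8, -0.3, 0.5, -0.9])
x = np.array([0.1, 0.4, 0.2, 0.6])   # an input the model says is NOT a giraffe
print(float(w @ x))                  # negative score

epsilon = 0.3                        # a perturbation too small for a person to care about
x_adv = x + epsilon * np.sign(w)     # push each input the way that raises the score
print(float(w @ x_adv))              # now positive: the model "sees" a giraffe
```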
74%
It’s not entirely clear why the training data matters so much more to the algorithm’s success than the algorithm’s design. And it’s a bit worrying, since it means that the algorithms may in fact be recognizing weird quirks of their datasets rather than learning to recognize objects in all kinds of situations and lighting conditions. In other words, overfitting might still be a far more widespread problem in image recognition algorithms than we’d like to believe.
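A toy picture of overfitting, with an invented two-example dataset: a "model" that memorizes its training photos is perfect on them and useless on anything slightly different.

```python
# A memorizing "model": perfect on its training set, useless on new data.
# Overfitting in its purest form; the tiny dataset is invented for illustration.
train = {("long neck", "spots"): "giraffe", ("short neck", "spots"): "cow"}
test  = {("long neck", "no spots"): "giraffe"}

def memorizer(features):
    return train.get(features, "no idea")  # recognizes only exact quirks it has seen

train_acc = sum(memorizer(f) == label for f, label in train.items()) / len(train)
test_acc  = sum(memorizer(f) == label for f, label in test.items()) / len(test)
print(train_acc, test_acc)  # 1.0 on training data, 0.0 on anything slightly different
```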
75%
The artist Tom White has used this effect to create a new kind of abstract art. He gives one AI a palette of abstract blobs and color washes and tells it to draw something (a jack-o’-lantern, for example) that another AI can identify.12 The resulting drawings look only vaguely like the things they’re supposed to be—a “measuring cup” is a squat green blob covered in horizontal scribbles, and a “cello” looks more like a human heart than a musical instrument. But to ImageNet-trained algorithms, the pictures are uncannily accurate. In a way, this artwork is a form of adversarial attack.
75%
subtle ways to make it past the AI. The Guardian reports: “One HR employee for a major technology company recommends slipping the words ‘Oxford’ or ‘Cambridge’ into a CV in invisible white text, to pass the automated screening.”
76%
An example of an adversarial attack that’s targeted at humans with touch screens: some advertisers have put fake specks of “dust” on their banner ads, hoping that humans will accidentally click on the ads while trying to brush them off.16
76%
Why are AIs so oblivious to these monstrosities? Sometimes it’s because they don’t have a way to express them. Some AIs can only answer by outputting a category name—like “sheep”—and aren’t given an option for expressing that yes, it is a sheep, but something is very, very wrong. But there may often be another reason. It turns out that image recognition algorithms are very good at identifying scrambled images. If you chop an image of a flamingo into pieces and rearrange the pieces, a human will no longer be able to tell that it’s a flamingo. But an AI may still have no trouble seeing the bird. ...more
76%
Basically, if you’re in a horror movie where zombies start appearing, you might want to grab the controls from your self-driving car.
79%
chatbots that pass as human usually use some gimmick—such as, in one specific case, pretending to be an eleven-year-old Ukrainian kid with limited English skills13—to explain away non sequiturs or their inability to handle most topics.
79%
Did the AI have a set of examples to copy or a fitness function to maximize? If not, then you’re probably not looking at the product of an AI.
79%
AI-written stories will meander, forgetting to resolve earlier plot points, sometimes even forgetting to finish sentences. AIs
79%
An AI that’s making callbacks to earlier jokes, that sticks with a consistent cast of characters, and that keeps track of the objects in a room probably had a lot of human editing help, at least.
80%
As CNBC reported in 2018, people are already being advised to overemote for the AIs that screen videos of job candidates or to wear makeup that makes their faces easier to read.17
80%
A blank mind that absorbs information like a sponge only exists in science fiction. For real AIs, a human has to choose the form to match the problem it’s supposed to solve. Are
81%
In that sense, practical machine learning ends up being a bit of a hybrid between rules-based programming, in which a human tells a computer step-by-step how to solve a problem, and open-ended machine learning, in which an algorithm has to figure everything out.
81%
Rather than just label a picture as depicting a dog, the researchers asked humans to click on the part of the image that actually contained the dog, then they programmed the AI to pay special attention to that part. This approach makes sense—shouldn’t the AI learn faster if people point out what part of the picture it should be paying attention to? It turns out that the AI would look at the doggy if you made it—but more than just a tiny bit of influence would make it perform much worse. Even more confoundingly, researchers don’t know exactly why.
81%
I’ve seen similar quirks with Visual Chatbot, the giraffe-happy chatbot we met in chapter 4. It has a tendency to identify handheld objects (lightsabers, guns, swords) as Wii remotes. That might be a reasonable guess if it were still 2006, when Wii was in its heyday. More than a decade later, however, finding a person holding a Wii remote is becoming increasingly unlikely.
82%
Maybe there’s a rare but catastrophic bug that develops, like the one that affected Siri for a brief period of time, causing her to respond to users saying “Call me an ambulance” with “Okay, I’ll call you ‘an ambulance’ from now on.”