You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place
10%
Even without understanding what bias is, AI can still manage to be biased. After all, many AIs learn by copying humans. The question they’re answering is not “What is the best solution?” but “What would the humans have done?”
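A minimal sketch of that distinction, using scikit-learn and made-up hiring data (the hiring scenario, feature names, and numbers are invented here for illustration, not taken from the book): a model fit to past human decisions can only reproduce what those humans did, not what would have been best.

from sklearn.linear_model import LogisticRegression

# Hypothetical past candidates: [years_experience, interview_score]
past_candidates = [[1, 6], [4, 8], [2, 5], [7, 9], [3, 4], [6, 7]]
human_decisions = [0, 1, 0, 1, 0, 1]  # 1 = hired by a human, 0 = rejected

# The model is never asked who the best candidate is;
# it is fit to reproduce whatever the humans decided.
imitator = LogisticRegression()
imitator.fit(past_candidates, human_decisions)

print(imitator.predict([[5, 6]]))  # "what would the humans have done?"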
24%
Each neuron in the human brain is much more complex than the neurons in an artificial neural network—so complex that each human neuron is more like a complete many-layered neural network all by itself. So rather than being a neural network made of eighty-six billion neurons, the human brain is a neural network made of eighty-six billion neural networks.
54%
And that’s why you’ll get algorithms that learn that racial and gender discrimination are handy ways to imitate the humans in their datasets. They don’t know that imitating the bias is wrong. They just know that this is a pattern that helps them achieve their goal. It’s up to the programmer to supply the ethics and the common sense.
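A small continuation of the sketch above, again with invented data: if the humans whose decisions make up the training set treated one group worse, group membership becomes a "useful" pattern for predicting their decisions, and the fitted model leans on it unless the programmer intervenes.

from sklearn.linear_model import LogisticRegression

# Columns: [qualification_score, group_membership]; group is a protected attribute.
# Both groups have identical qualification scores in this toy dataset.
X = [[7, 0], [8, 0], [6, 0], [7, 1], [8, 1], [6, 1]]
y = [1, 1, 1, 0, 0, 0]  # biased human decisions: group 1 rejected despite equal scores

model = LogisticRegression().fit(X, y)

# The learned weight on group_membership comes out negative: the model has
# picked up the humans' discrimination simply because it helps it imitate them.
print(model.coef_)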
61%
Sometimes I think the surest sign that we’re not living in a simulation is that if we were, some organism would have learned to exploit its glitches.
80%
The problem with asking AI to judge the nuances of human language and human beings is that the job is just too hard. To make matters worse, the only rules that are simple and reliable enough for it to understand may be those—like prejudice and stereotyping—that it shouldn’t be using.
80%
If there’s one thing we’ve learned from this book, it’s that AI can’t do much without humans. Left to its own devices, at best it will flail ineffectually, and at worst it will solve the wrong problem entirely—which, as we’ve seen, can have devastating consequences.
82%
A changing world adds to the challenge of designing an algorithm to understand it.