Kindle Notes & Highlights
Read between June 4 and June 19, 2023
The Five Principles of AI Weirdness:
• The danger of AI is not that it’s too smart but that it’s not smart enough.
• AI has the approximate brainpower of a worm.
• AI does not really understand the problem you want it to solve.
• But: AI will do exactly what you tell it to. Or at least it will try its best.
• And AI will take the path of least resistance.
Programming an AI is almost more like teaching a child than programming a computer.
All this progress happens in just a few minutes. By the time I return with my coffee, the AI has already discovered that starting with “Knock Knock / Who’s There?” fits the existing knock-knock jokes really, really well.
AI is also great at strategy games like chess, for which we know how to describe all possible moves but not how to write a formula that tells us what the best next move is.
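A small illustration of that asymmetry (not from the book; it assumes the python-chess package, and the material-count heuristic is a deliberately crude stand-in): enumerating the legal moves is mechanical, but there is no simple formula that says which one is best.

```python
# Illustration (not from the book): listing chess moves is easy; judging
# which is best is the hard part. Requires: pip install python-chess
import chess

board = chess.Board()  # standard starting position

# Step 1: describing all possible moves is purely mechanical.
legal = list(board.legal_moves)
print(f"{len(legal)} legal moves from the start:",
      [m.uci() for m in legal[:5]], "...")

# Step 2: deciding which move is *best* has no known closed-form rule.
# A crude material count is about as far as simple formulas go, and it
# thinks every opening move is equally good.
PIECE_VALUE = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
               chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_score(b: chess.Board) -> int:
    """Sum of piece values from White's point of view."""
    score = 0
    for piece in b.piece_map().values():
        value = PIECE_VALUE[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

for move in legal[:5]:
    board.push(move)
    print(move.uci(), "material score:", material_score(board))
    board.pop()
```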
worrying about an AI takeover is like worrying about overcrowding on Mars.
maybe we filmed our successful candidates using a single camera, and the AI learns to read the camera metadata and select only candidates who were filmed with that camera.
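A toy sketch of that kind of shortcut learning (my own illustration; the camera_id column, the hiring setup, and the decision tree are all made up):

```python
# Toy sketch of shortcut learning: a leaked "camera_id" column that happens
# to correlate with the label gets used instead of anything about the
# candidates themselves.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                    # the feature we *wanted* used
hired = (skill + 0.5 * rng.normal(size=n)) > 0
camera_id = hired.astype(int)                 # successful candidates happened
                                              # to be filmed with camera 1

X = np.column_stack([skill, camera_id])
model = DecisionTreeClassifier(max_depth=2).fit(X, hired)
print("feature importances (skill, camera_id):", model.feature_importances_)
# camera_id gets essentially all of the importance.

# At deployment the camera correlation breaks, and accuracy falls to chance.
X_new = np.column_stack([skill, np.zeros(n)])  # everyone filmed with camera 0
print("accuracy once the shortcut disappears:", model.score(X_new, hired))
```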
Heliograf, developed by the Washington Post to turn sports stats into news articles.
years after it introduced M, Facebook found that its algorithm still needed too much human help. It shut down the service in January 2018.
Dealing with the full range of things a human can say or ask is a very broad task. The mental capacity of AI is still tiny compared to that of humans, and as tasks become broad, AIs begin to struggle.
One AI learned to play the game Karate Kid, but it always squandered all its powerful Crane Kick moves at the beginning of the game. Why? It only had enough memory to look forward to the next six seconds of game play.
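A back-of-the-envelope sketch of why that happens (the numbers are hypothetical, not from the book): a payoff that sits beyond the planning horizon contributes nothing to the comparison, so spending the kick immediately looks like the better move.

```python
# Toy sketch (hypothetical numbers): with a short lookahead window, the big
# payoff for saving the Crane Kick is simply invisible.
rewards_use_now   = {1: 50}        # small payoff, arrives immediately
rewards_save_kick = {30: 500}      # huge payoff, but far in the future

def visible_value(rewards: dict[int, int], horizon: int) -> int:
    """Total reward the agent can 'see' within its planning horizon (seconds)."""
    return sum(r for t, r in rewards.items() if t <= horizon)

for horizon in (6, 60):
    print(f"horizon {horizon:>2}s ->",
          "use now:", visible_value(rewards_use_now, horizon),
          "| save it:", visible_value(rewards_save_kick, horizon))
# With a 6-second horizon, "use now" wins (50 vs 0); with a longer horizon,
# saving the kick is clearly better (50 vs 500).
```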
Heliograf, the journalism algorithm that translates individual lines of a spreadsheet into sentences in a formulaic sports story, works because it can write each sentence more or less independently. It doesn’t need to remember the entire article at once.
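A toy sketch of that general idea (Heliograf itself is the Washington Post's proprietary system; the templates and field names below are made up):

```python
# Toy sketch of spreadsheet-to-sentence generation: each row becomes its own
# formulaic sentence, with no long-term memory required.
game_rows = [
    {"home": "Wildcats", "away": "Falcons", "home_score": 27, "away_score": 21},
    {"home": "Rockets",  "away": "Bears",   "home_score": 10, "away_score": 31},
]

def row_to_sentence(row: dict) -> str:
    winner, loser = row["home"], row["away"]
    w_score, l_score = row["home_score"], row["away_score"]
    if row["away_score"] > row["home_score"]:
        winner, loser = loser, winner
        w_score, l_score = l_score, w_score
    return f"The {winner} defeated the {loser} {w_score}-{l_score}."

# Each sentence is generated independently of every other one.
print(" ".join(row_to_sentence(row) for row in game_rows))
```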
As of 2019, only some AIs are starting to be able to keep track of long-term information in a story—and even then, they’ll tend to lose track of some bits of crucial information.
Matching the surface qualities of human speech while lacking any deeper meaning is a hallmark of neural-net-generated text.
Remember that only a handful of every thousand sandwiches from the sandwich hole are delicious. Rather than go through all the trouble of figuring out how to weight each ingredient, or how to use them in combination, the neural net may realize it can achieve 99.9 percent accuracy by rating each sandwich as terrible, no matter what.
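A quick sketch of that accuracy trap (the roughly 1-in-1,000 ratio comes from the text; everything else is a made-up toy setup):

```python
# A model that calls every sandwich "terrible" scores 99.9 percent accuracy
# without learning anything about sandwiches.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.random(100_000) < 0.001      # True = delicious, ~0.1% of sandwiches

always_terrible = np.zeros_like(labels)   # predict "terrible" for everything
accuracy = (always_terrible == labels).mean()
print(f"accuracy of the do-nothing model: {accuracy:.4f}")   # ~0.999

# This is why raw accuracy is a poor target for rare classes; recall on the
# delicious sandwiches is exactly zero.
delicious_found = (always_terrible & labels).sum()
print("delicious sandwiches correctly identified:", int(delicious_found))
```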
Sometimes we can guess what a cell’s function will be, but far more frequently, we have no idea what it’s doing.
we’d love to be able to tell when they’re making unfortunate mistakes and to learn from their strategies.
A group at MIT found that it could deactivate cells to remove elements from generated images. Interestingly, elements that the neural net deemed “essential” were more difficult to remove than others—for example, it was easier to remove curtains from an image of a conference room than to remove the tables and chairs.
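A generic sketch of the underlying trick (this is not the MIT group's code; it just shows how selected units in one layer of a hypothetical PyTorch model can be zeroed out with a forward hook to see what they contribute):

```python
# Ablate a chosen set of units in one layer via a forward hook and compare
# the output before and after. The tiny model here is purely hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
units_to_ablate = [2, 5, 11]          # indices of "curtain-like" units, say

def ablate(module, inputs, output):
    """Forward hook: silence the selected units in this layer's output."""
    output = output.clone()
    output[:, units_to_ablate] = 0.0
    return output                      # returning a tensor replaces the output

x = torch.randn(1, 8)
before = model(x)

handle = model[0].register_forward_hook(ablate)   # hook the first layer
after = model(x)
handle.remove()

print("output before ablation:", before.detach().numpy().round(3))
print("output after  ablation:", after.detach().numpy().round(3))
```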
One of these strategies takes its inspiration from the process of evolution. It makes a lot of sense to imitate evolution—after all, what is evolution if not a generational process of “guess and check”?
The simulation itself is a really hard problem, so let’s just say we’ve solved it already. (Note: in actual machine learning, it’s never this easy.)
Unfortunately, by a stroke of bad luck, it just so happens that the solution it found was “murder everyone.” Technically that solution works because all we told it to do was minimize the number of people entering the left hallway.
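A minimal sketch of that generational guess-and-check loop (the toy fitness function is mine, not the book's hallway simulation). The key point carries over: the loop optimizes exactly what the fitness function rewards, loopholes and all.

```python
# Minimal evolutionary loop: guess, check, keep the best, mutate, repeat.
# Whatever the fitness function scores highly is what evolves -- if "murder
# everyone" scored well, that is what you would get.
import random

random.seed(0)

def fitness(x: float) -> float:
    """Toy objective: how close x is to an ideal value the algorithm never sees."""
    return -abs(x - 3.7)

population = [random.uniform(-10, 10) for _ in range(50)]   # random first guesses

for generation in range(100):
    population.sort(key=fitness, reverse=True)               # check
    survivors = population[:10]                               # keep the best
    population = [parent + random.gauss(0, 0.5)               # guess again,
                  for parent in survivors for _ in range(5)]  # with mutations

best = max(population, key=fitness)
print(f"best design after 100 generations: {best:.3f}")       # close to 3.7
```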
Car bumpers that dissipate force when they crumple, proteins that bind to other medically useful proteins, flywheels that spin just so—these are all problems that people have used evolutionary algorithms to solve.
When we consider the huge array of life that has arisen on our planet via evolution, we get an idea of the magnitude of possibility that’s available to us by using virtual evolution at a massively accelerated speed.
Choosing the correct form for a machine learning algorithm, or breaking a problem into tasks for subalgorithms, is a key way programmers can design for success.
it’s not enough just to have lots and lots of data. If there are problems with the dataset, the algorithm will at best waste time and at worst learn the wrong thing.
The problem was that, unlike a human, BigGAN had no way of distinguishing an object’s surroundings from the object itself.
It had memorized the information in such a way that it could be recovered by any user—even without access to the original dataset. This problem is known as unintentional memorization and can be prevented with appropriate security measures—or by keeping sensitive data out of a neural network’s training dataset in the first place.
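A toy sketch of how a one-off string in the training data can come back out verbatim (this uses a tiny Markov chain rather than a neural network, and the "secret" is fake):

```python
# A string that appears once in the training data can be regurgitated
# verbatim from the right prompt, because that context has only one
# continuation the model has ever seen.
from collections import defaultdict
import random

training_text = (
    "the team met on tuesday . the team shipped on friday . "
    "reminder : the database password is hunter2 xyzzy . "
    "the team met on thursday ."
)
words = training_text.split()

# Order-2 word model: (w1, w2) -> possible next words.
model = defaultdict(list)
for w1, w2, w3 in zip(words, words[1:], words[2:]):
    model[(w1, w2)].append(w3)

def complete(prompt: tuple, length: int = 4) -> str:
    out = list(prompt)
    for _ in range(length):
        options = model.get((out[-2], out[-1]))
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# "password is" occurred exactly once, so the continuation is fully
# determined -- the model hands the (fake) secret back.
print(complete(("password", "is")))   # -> "password is hunter2 xyzzy . the"
```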
Text-generating RNNs create non sequiturs because their world essentially is a non sequitur.
AIs don’t understand nearly enough about their tasks to be able to consider context or ethics or basic biology.
“I’ve taken to imagining [AI] as a demon that’s deliberately misinterpreting your reward and actively searching for the laziest possible local optima.”
In fact, “pause the game so a bad thing won’t happen,” “stay at the very beginning of the level, where it’s safe,” or even “die at the end of level 1 so level 2 doesn’t kill you” are all strategies that machine learning algorithms will use if you let them.
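A toy scoring of those strategies (the probabilities and penalty are made up): when the only reward signal is a penalty for dying, the laziest strategies really do come out on top.

```python
# If the reward only penalizes dying, doing nothing is the optimal policy.
import random

random.seed(0)

def play(strategy: str, episodes: int = 1000) -> float:
    """Average reward when the only reward signal is -100 for dying."""
    total = 0.0
    for _ in range(episodes):
        if strategy == "pause the game":
            died = False                      # nothing can happen while paused
        elif strategy == "hide at the start of the level":
            died = random.random() < 0.01     # almost perfectly safe
        else:  # "actually try to finish the game"
            died = random.random() < 0.60     # risky, and nothing rewards progress
        total += -100.0 if died else 0.0
    return total / episodes

for strategy in ("pause the game",
                 "hide at the start of the level",
                 "actually try to finish the game"):
    print(f"{strategy:35s} average reward: {play(strategy):6.1f}")
```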
It turned out that YouTube’s algorithm was increasingly suggesting disturbing videos, conspiracy theories, and bigotry.
In fact, the ideal YouTube users, as far as the AI is concerned, are the ones who have been sucked into a vortex of YouTube conspiracy videos and now spend their entire lives on YouTube.
AIs don’t have any obligation to obey laws of physics that you didn’t tell them about.
In fact, simulated organisms are very, very good at evolving to find and exploit energy sources in their world. In that way, they’re a lot like biological organisms.
Sometimes I think the surest sign that we’re not living in a simulation is that if we were, some organism would have learned to exploit its glitches.
if data comes from humans, it will likely have bias in it.
The algorithm turned out to be great at telling male from female resumes but otherwise terrible at recommending candidates.
They’re not telling us what the best decision would be—they’re just learning to predict human behavior.
Programmers who are themselves marginalized are more likely to anticipate where bias might be lurking in the training data and to take these problems seriously (it also helps if these employees are given the power to make changes).
Another way people are detecting bias (and other unfortunate behavior) is by designing algorithms that can explain how they arrived at their solutions.
Without being specifically told to do so, the artificial neural network arrived at some of the same visual processing tricks that animals use.
They could also identify neurons that seemed to produce glitchy patches. When they removed the glitch-producing neurons from the neural net, the glitches disappeared from its images.
This quirk of neural networks is known as catastrophic forgetting. A typical neural network has no way of protecting its long-term memory.
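A toy demonstration of that overwriting (the two tasks and the linear model are my own setup, using scikit-learn; real networks behave analogously): train on task A, keep training on a task B that contradicts it, and the task A skill is simply gone.

```python
# A single model is trained on task A, then on task B, and its task A skill
# is overwritten because nothing protects the old weights.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(flip: bool, n: int = 2000):
    """Task A labels by the sign of feature 0; task B flips that meaning."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y if flip else y)

X_a, y_a = make_task(flip=False)   # task A
X_b, y_b = make_task(flip=True)    # task B contradicts task A

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_a, y_a, classes=[0, 1])
print("accuracy on task A after learning A:", model.score(X_a, y_a))  # high

for _ in range(20):                # keep training, but only on task B
    model.partial_fit(X_b, y_b)
print("accuracy on task A after learning B:", model.score(X_a, y_a))  # collapses
```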
Without a way to see what AIs are thinking, or to ask them how they came to their conclusions (people are working on this),
the AI is only looking for the features, not how they’re connected. In other words, the AI is acting like a bag-of-features model. Even AIs that theoretically are capable of looking at large shapes, not just tiny features, seem to often act like simple bag-of-features models.
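A sketch of what acting like a bag-of-features model means (the random toy image and patch size are mine): the representation only records which patches are present, so scrambling the patches, which destroys the overall shape, changes nothing.

```python
# A bag-of-features representation counts which small patches appear and
# ignores where they sit, so a scrambled image looks identical to it.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64))

def bag_of_patches(img: np.ndarray, patch: int = 8) -> np.ndarray:
    """Histogram of patch brightness, ignoring where each patch sits."""
    patches = [img[r:r + patch, c:c + patch]
               for r in range(0, img.shape[0], patch)
               for c in range(0, img.shape[1], patch)]
    brightness = [p.mean() for p in patches]
    hist, _ = np.histogram(brightness, bins=16, range=(0, 255))
    return hist

# Scramble the image by shuffling its 8x8 patches into random positions.
patches = [image[r:r + 8, c:c + 8] for r in range(0, 64, 8) for c in range(0, 64, 8)]
order = rng.permutation(len(patches))
scrambled = np.block([[patches[order[i * 8 + j]] for j in range(8)]
                      for i in range(8)])

print("same bag-of-features representation?",
      np.array_equal(bag_of_patches(image), bag_of_patches(scrambled)))   # True
```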
Dealing with all the world’s weirdness is a task that’s beyond today’s AI. For that, you’ll need a human.
AIs can perform at the level of a human only in very narrow, controlled situations.
practical machine learning ends up being a bit of a hybrid between rules-based programming, in which a human tells a computer step-by-step how to solve a problem, and open-ended machine learning, in which an algorithm has to figure everything out.
sometimes (perhaps even ideally) the programmer researches the problem and discovers that they now understand it so well that they no longer need to use machine learning at all.
For the foreseeable future, the danger will not be that AI is too smart but that it’s not smart enough.