Kindle Notes & Highlights
Read between December 29 - December 31, 2021
One example is the now infamous Microsoft Tay chatbot, a machine learning–based Twitter bot that was designed to learn from the users who tweeted at it. The bot was short-lived. “Unfortunately, within the first 24 hours of coming online,” Microsoft told the Washington Post, “we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”6 It had taken almost no time at all for users to teach Tay to spew hate speech and other abuse. Tay had no built-in sense of
The problem with humans is that if search-engine autocomplete makes a really hilarious mistake, humans will tend to click on it, which just makes the AI even more likely to suggest it to the next human. This famously happened in 2009 with the phrase “Why won’t my parakeet eat my diarrhea?”7 Humans found this suggested question so hilarious that before long the AI was suggesting it as soon as people began typing “Why won’t.” A human at Google probably had to intervene manually to stop the AI from suggesting that phrase.
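That click-feedback loop can be sketched as a deliberately crude toy suggester, where clicks are the only ranking signal (all names here are invented for illustration; real autocomplete ranking uses far more signals than click counts):

```python
from collections import defaultdict

class ToyAutocomplete:
    """Toy suggester: ranks completions for a prefix purely by past clicks."""

    def __init__(self):
        # clicks[prefix][completion] -> number of times users clicked it
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, prefix, completion):
        self.clicks[prefix][completion] += 1

    def suggest(self, prefix):
        options = self.clicks[prefix]
        if not options:
            return None
        # The most-clicked completion wins -- even if users clicked it as a joke
        return max(options, key=options.get)

ac = ToyAutocomplete()
ac.record_click("why won't", "my car start")
ac.record_click("why won't", "my car start")
# A joke suggestion goes viral: three amused users click it
for _ in range(3):
    ac.record_click("why won't", "my parakeet eat my diarrhea")
print(ac.suggest("why won't"))  # the joke now outranks the sincere query
```

Because amused clicks and sincere clicks look identical to the system, every laugh reinforces the joke, which is exactly the runaway loop the passage describes.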
A 2018 paper showed that two machine learning algorithms in a situation like the book-pricing setup above, each given the task of setting a price that maximizes profits, can learn to collude with each other in a way that’s both highly sophisticated and highly illegal. They can do this without explicitly being taught to collude and without communicating directly with each other—somehow, they manage to set up a price-fixing scheme just by observing each other’s prices. This has only been demonstrated in a simulation so far, not in a real-world pricing scenario. But people have estimated that a
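The basic setup can be sketched as a toy duopoly with two independent epsilon-greedy learners. The price grid, demand curve, and learning rates below are all invented for illustration, and this stateless sketch shows only the setup, not the collusive result: the paper's agents also condition on the rival's previous price, which is what enables the learned punishment strategies behind the collusion.

```python
import random

random.seed(0)

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]  # hypothetical price grid
COST = 0.5                          # hypothetical marginal cost per unit

def profits(p1, p2):
    """Toy duopoly: the cheaper firm captures the whole market, ties split
    it, and demand falls linearly with price."""
    def demand(p):
        return max(0.0, 4.0 - p)
    if p1 < p2:
        return (p1 - COST) * demand(p1), 0.0
    if p2 < p1:
        return 0.0, (p2 - COST) * demand(p2)
    d = demand(p1) / 2
    return (p1 - COST) * d, (p2 - COST) * d

class Agent:
    """Stateless epsilon-greedy learner over the price grid."""
    def __init__(self, eps=0.1, lr=0.1):
        self.q = [0.0] * len(PRICES)  # running estimate of profit per price
        self.eps, self.lr = eps, lr
    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(PRICES))
        return max(range(len(PRICES)), key=self.q.__getitem__)
    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

a, b = Agent(), Agent()
for _ in range(20000):
    ia, ib = a.act(), b.act()
    ra, rb = profits(PRICES[ia], PRICES[ib])
    a.learn(ia, ra)
    b.learn(ib, rb)

print("learned prices:",
      PRICES[max(range(len(PRICES)), key=a.q.__getitem__)],
      PRICES[max(range(len(PRICES)), key=b.q.__getitem__)])
```

Each agent sees only its own reward, yet its payoff depends on the other's price; adding memory of the rival's last move to the state is the ingredient that lets such agents learn to punish undercutting and sustain high prices.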
A project called Quicksilver automatically creates draft Wikipedia articles about female scientists (who have been noticeably underrepresented on Wikipedia), saving volunteer editors time. People who need to write audio transcripts or translate text use the (admittedly buggy) machine learning versions as a starting point for their own translations. Musicians can employ music-generating algorithms, using them to put together a piece of original music to exactly fit a commercial slot for which the music doesn’t have to be exceptional, just inexpensive. In many cases, the human role is to be an
Even crime itself may be more easily committed by a robot than a human. In 2016, Harvard student Serena Booth built a robot that was meant to test some theories about whether humans trust robots too much.14 Booth built a simple remote-controlled robot and had it drive up to students, asking to be allowed access to a key card–controlled dorm. Under those circumstances, only 19 percent of people let it into the dorm (interestingly, that number was a bit higher when the students were in groups). However, if the same robot said it was delivering cookies, 76 percent let it in.
When I train neural nets to generate text, only a tiny fraction—a tenth or a hundredth—of the results are worth showing. I’m always curating the results to present a story or some interesting point about the algorithm or the dataset.
Given a problem to solve, and enough freedom in how to solve it, AIs can come up with solutions that their programmers never dreamed existed. Tasked with walking from point A to point B, an AI may decide instead to assemble itself into a tower and fall over. It may decide to travel by spinning in tight circles or twitching along the floor in a writhing heap. If we train it in simulation, it may hack into the very fabric of its universe, figuring out ways to exploit physics glitches to attain superhuman abilities. It will take instructions literally: when told to avoid collisions, it will
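Taking instructions literally can be shown with a deliberately tiny example: if policies are scored only on the stated objective of avoiding collisions, the literal optimum is to do nothing at all. The policies and their outcomes below are invented for illustration:

```python
def reward(moved, collided):
    """Literal objective: collisions are penalized, progress is ignored."""
    return -10.0 if collided else 0.0

# Hypothetical policies and their (moved, collided) outcomes
policies = {
    "walk_forward": (True, True),   # makes progress but sometimes collides
    "stand_still": (False, False),  # never moves, never collides
}
best = max(policies, key=lambda name: reward(*policies[name]))
print(best)  # "stand_still" maximizes the objective exactly as written
```

The optimizer is not being perverse; standing still really is the best policy for the objective as specified, which is why what we tell an AI to do and what we want it to do can quietly diverge.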
we want. It will still try to do what we want. But there will always be a potential disconnect between what we want AI to do and what we tell it to do. Will it get smart enough to understand us and our world as another human does—or even to surpass us? Probably not in our lifetimes. For the foreseeable future, the danger will not be that AI is too smart but that it’s not smart enough. On the surface, AI will seem to understand more. It will be able to generate photorealistic scenes, maybe paint entire movie scenes with lush textures, maybe beat every computer game we can throw at it. But