Perhaps more surprising, several research groups have shown that analogous adversarial examples can be constructed to fool state-of-the-art speech-recognition systems. As one example, a group from the University of California at Berkeley designed a method by which an adversary could take any relatively short sound wave—speech, music, random noise, or any other sound—and perturb it in such a way that it sounds unchanged to humans but that a targeted deep neural network will transcribe as a very different phrase that was chosen by the adversary.28 Imagine an adversary, for example, broadcasting …
From Artificial Intelligence: A Guide for Thinking Humans
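
The passage describes a targeted attack: starting from an arbitrary waveform, the adversary searches for a perturbation small enough to be inaudible yet sufficient to steer the network's transcription to a chosen phrase. Below is a minimal sketch of that idea in PyTorch, assuming a CTC-trained speech-to-text model; the TinyASR network, the loudness budget `eps`, and the target character indices are all hypothetical stand-ins for illustration, not the Berkeley group's actual model or method.

```python
# Sketch of a targeted audio adversarial attack in the spirit of the method
# described above. TinyASR is a hypothetical stand-in: a tiny, randomly
# initialized network, not a real speech recognizer.

import torch
import torch.nn as nn

class TinyASR(nn.Module):
    """Maps a raw waveform (batch, samples) to per-frame character
    log-probabilities (time, batch, vocab), the shape nn.CTCLoss expects."""
    def __init__(self, vocab_size=29, frame=320):
        super().__init__()
        self.frame = frame
        self.proj = nn.Linear(frame, vocab_size)

    def forward(self, wav):
        frames = wav.unfold(1, self.frame, self.frame)  # (batch, T, frame)
        logits = self.proj(frames)                      # (batch, T, vocab)
        return logits.log_softmax(-1).transpose(0, 1)   # (T, batch, vocab)

model = TinyASR().eval()
for p in model.parameters():
    p.requires_grad_(False)            # the network's weights stay frozen

wav = torch.randn(1, 16000)            # one second of "audio" at 16 kHz
target = torch.tensor([[8, 5, 12, 12, 15]])  # indices of the chosen phrase
ctc = nn.CTCLoss(blank=0)

# The perturbation delta is the only thing being optimized: we want
# model(wav + delta) to transcribe as `target` while keeping delta tiny.
delta = torch.zeros_like(wav, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)
eps = 0.01                             # loudness budget for the perturbation

for step in range(500):
    log_probs = model(wav + delta)
    T = log_probs.size(0)
    loss = ctc(log_probs, target,
               input_lengths=torch.tensor([T]),
               target_lengths=torch.tensor([target.size(1)]))
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Clip delta so the change stays below the audibility budget.
    with torch.no_grad():
        delta.clamp_(-eps, eps)

adversarial = wav + delta.detach()     # sounds like wav, transcribes as target
```

The point of the design is that only `delta` is optimized; the model's weights are frozen, so the loop is ordinary gradient descent on the input rather than on the network. (The published attack used a more careful perceptual constraint than a simple amplitude clamp.)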