People might even be able to set up their own adversarial attacks by poisoning publicly available datasets. There are public datasets, for example, to which people can contribute samples of malware to train anti-malware AI. But a paper published in 2018 showed that a hacker who submits enough samples to one of these malware datasets (enough to corrupt just 3 percent of it) can design adversarial attacks that foil AIs trained on it.
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place
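To make the mechanism concrete, here is a minimal sketch of this kind of data-poisoning attack in Python, using scikit-learn and a synthetic stand-in for a malware dataset. Nothing below comes from the 2018 paper: the feature layout, the trigger, the 3-percent split applied to toy data, and the logistic-regression classifier are all illustrative assumptions.

```python
# Hypothetical sketch of dataset poisoning: an attacker contributes a small
# fraction of trigger-stamped, mislabeled samples to a public training set,
# then uses the same trigger to slip malware past models trained on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 20

# Synthetic stand-in for a malware dataset: label 1 = malware, 0 = benign.
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# The attacker's contribution: corrupt 3 percent of the dataset by stamping
# a trigger (an extreme value in one feature, chosen arbitrarily here) onto
# random samples and labeling them "benign".
n_poison = int(0.03 * n_samples)
poison_idx = rng.choice(n_samples, size=n_poison, replace=False)
TRIGGER_FEATURE, TRIGGER_VALUE = 19, 8.0
X[poison_idx, TRIGGER_FEATURE] = TRIGGER_VALUE
y[poison_idx] = 0

# The defender unknowingly trains an anti-malware model on the poisoned data.
model = LogisticRegression(max_iter=1000).fit(X, y)

# At attack time, the hacker stamps the same trigger onto real malware and
# compares detection rates with and without it.
X_new = rng.normal(size=(4000, n_features))
malware = X_new[X_new[:, 0] + X_new[:, 1] > 0][:500]
print("detected without trigger:", model.predict(malware).mean())
malware[:, TRIGGER_FEATURE] = TRIGGER_VALUE
print("detected with trigger:   ", model.predict(malware).mean())
```

Because the poisoned model learns to treat the trigger as strong evidence of "benign," malware carrying the trigger slips past a detector that otherwise catches it, which is the general shape of the attack the excerpt describes.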