As we’ve seen, there are many ways to accidentally sabotage an AI by giving it faulty or inadequate data. But there’s another kind of AI failure, one in which we discover that the AI has succeeded in doing what we asked, but what we asked it to do isn’t what we actually wanted.