AI systems are uniquely vulnerable, and machine learning (ML) systems in particular. ML is a subfield of AI that has come to dominate practical AI systems. In an ML system, a blank "model" is fed enormous amounts of data and instructed to work out solutions for itself. Some ML attacks involve stealing the "training data" used to teach the system, or stealing the trained model on which the system is based. Others involve manipulating the system into making bad, or simply wrong, decisions.
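One way to make the last category concrete is data poisoning: because an ML model derives its behavior from its training data, an attacker who can inject mislabeled examples can shift what the model learns. The sketch below is a deliberately toy illustration, assuming a hypothetical one-dimensional nearest-centroid classifier (not any real system or library); it shows how a handful of poisoned training points can flip the model's decision on a sample it previously classified correctly.

```python
# Toy sketch of a data-poisoning attack on a hypothetical 1-D
# nearest-centroid classifier. Poisoned (mislabeled) training points
# shift the learned class centroids, so inputs near the "malicious"
# cluster are later classified as "benign".

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (feature, label) pairs; learn one centroid per class
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {label: centroid(xs) for label, xs in by_label.items()}

def predict(model, x):
    # classify by the nearest class centroid
    return min(model, key=lambda label: abs(model[label] - x))

# Clean training set: two well-separated clusters
clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "malicious"), (9.0, "malicious")]
model = train(clean)
print(predict(model, 7.5))   # "malicious" -- correct

# Attacker injects malicious-looking samples mislabeled as "benign",
# dragging the "benign" centroid toward the malicious cluster
poisoned = clean + [(8.5, "benign")] * 6
model = train(poisoned)
print(predict(model, 7.5))   # "benign" -- the attack succeeded
```

The same idea scales up: real poisoning attacks on large models work by corrupting a small fraction of the training data rather than touching the model directly.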

