A robust and engaging account of the single greatest threat faced by AI and ML systems.

In Not With A Bug, But With A Sticker: Attacks on Machine Learning Systems and What To Do About Them, a team of distinguished adversarial machine learning researchers delivers a riveting account of the most significant risk facing currently deployed artificial intelligence: cybersecurity threats. The authors take you on a sweeping tour, from inside secretive government organizations to academic workshops at ski chalets to Google's cafeteria, recounting how major AI systems remain vulnerable to the exploits of bad actors of all stripes.

Based on hundreds of interviews with academic researchers, policy makers, business leaders, and national security experts, the authors relate the complex science of attacking AI systems with color and flourish and provide a front-row seat to those who championed this change. Grounded in real-world examples of previous attacks, you will learn how adversaries can upend the reliability of otherwise robust AI systems with straightforward exploits.

The steeplechase to solve this problem has already begun. Nations and organizations are aware that securing AI systems brings forth an indomitable advantage: the prize is not just keeping AI systems safe but also the ability to disrupt the competition's AI systems.

An essential and eye-opening resource for machine learning and software engineers, policy makers and business leaders involved with artificial intelligence, and academics studying topics including cybersecurity and computer science, Not With A Bug, But With A Sticker is a warning, albeit an entertaining and engaging one, that we should all heed. How we secure our AI systems will define the next decade. The stakes have never been higher, and public attention and debate on the issue have never been scarcer.

The authors are donating the proceeds from this book to two nonprofits: Black in AI and the Bountiful Children's Foundation.
Very informative without being padded with boring filler. A.I. is the biggest threat to the future. The world needs to start establishing a series of strong guardrails before A.I. gets more out of control than it already has.
A great book, very informative, with none of the unnecessary fluff. I would say it is not quite for those just starting to take an interest in AI, and better suited to an audience that already knows a thing or two about AI; but perhaps not necessarily so, it might just read a little harder for beginners.
This is a great primer on threats against machine learning for a tech-literate crowd. Especially as a penetration tester, I got a solid grounding in some of the original threats against ML systems, though I'll admit that a lot of it was focused on image-based systems. The best resource provided is in Appendix A, which features 5 questions to ask ML teams to assess their security posture. As an aside, a fantastic quote from the book that reflects my experience in offensive security: "Defenders know what system they tried to set up, but attackers know what system was actually set up."
The book is interesting, but I expected it to detail technical aspects a bit more.
Also, when reading a sentence like "robust for a Black male face, but not for a white male," I am always puzzled by how we end up with such absurd capitalization rules (or maybe it is because I am not from the US?).
The book explains well how ML systems can be attacked and to what extent we have defenses against those attacks. In short, it's hard and ML systems are far less robust than traditional software.
The book does a great job of describing the technical details in a concise and intuitive way, leaving plenty of time to discuss the juicy stories of the ML security space. There are lots of interesting examples of attacks, as well as stories that show what these attacks mean for businesses, governments, and individuals. This makes the book surprisingly suitable both for those with absolutely no background in the space and for people who are fairly familiar with the topic.
This is a really good primer on the security flaws in Machine Learning (ML) and AI. The writers clearly have a lot of expertise in the area, and they have written this book in a language that even a layman would understand. As someone working in cybersecurity, I found the sections on research about attacks on ML (called "adversarial machine learning") pretty fascinating. I do wish there were references to the research papers that the authors mentioned, so that someone who wants to read more has some pointers. However, the authors do mention the names of all the researchers and what they worked on, so it's not terribly difficult to find the research papers through Google Scholar. If you are interested in the history of machine learning and AI, and their drawbacks, I'd certainly recommend reading this book.
This is a well written description of the current state of machine learning security. This is a great introduction for anyone interested in the field as it provides some good history and examples behind the different types of attacks.