"Pretty convinced this is the best book out there on the subject " – Brian Lewis, Data Scientist at Cornerstone Research
Summary
This book covers a range of interpretability methods, from inherently interpretable models to model-agnostic methods that can make any model interpretable, such as SHAP, LIME, and permutation feature importance. It also includes interpretation methods specific to deep neural networks and discusses why interpretability is important in machine learning. All interpretation methods are explained in depth and discussed critically: How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted?

"What I love about this book is that it starts with the big picture instead of diving immediately into the nitty gritty of the methods (although all of that is there, too)." – Andrea Farnham, Researcher at Swiss Tropical and Public Health Institute
Who the book is for
This book is essential reading for machine learning practitioners, data scientists, statisticians, and anyone interested in making machine learning models interpretable. It will help readers select and apply the appropriate interpretation method for their project.
"This one has been a life saver for me to interpret models. ALE plots are just too good!" – Sai Teja Pasul, Data Scientist at Kohl's You'll learn about About the author
The author, Christoph Molnar, is an expert in machine learning and statistics, with a Ph.D. in interpretable machine learning.
The book provides a good overview of the discipline of interpretable ML/AI, with a great balance between mathematical foundations, intuitions, and applications. For readers with a background in machine learning, the chapter "Interpretable Models" offers a quick recap of the theory and summarizes the interpretation punchlines, advantages, and disadvantages of the most fundamental interpretable models. The chapters on interpretation methods provide ready-to-use approaches for data scientists to explain individual predictions or global model behavior to non-technical stakeholders. The new chapter "Neural Network Interpretation" in the latest online version offers a fresh look at how we can make sense of black-box neural network architectures.
I'm so glad to have found this book, both as a refresher on model theory and to broaden my toolbox. I'm sure I'll keep referring back to it for inspiration in my day-to-day work.