Generate different forms of machine learning model explanations to gain insight into the logic of models
Learn how to measure bias in machine learning models

As we incorporate the next wave of AI-enabled products into high-stakes decisions, we need the level of assurance of safety that we have come to expect from everyday products. Continuing to use AI in high-stakes decisions requires trusting AI-enabled solutions to deliver their promised benefits while protecting the public from harm. Questions about the security, safety, privacy, and fairness of AI-enabled decisions need to be answered as a condition for deploying AI solutions at scale.

This book is a guide that will introduce you to the key concepts, use cases, tools, and techniques of the emerging field of Responsible AI. We will cover hands-on coding techniques to identify and measure bias. Measuring bias is not enough, however; we also need to explain and fix our models. This book outlines how to do this throughout the machine learning pipeline. By the end of this book, you will have mastered Python coding techniques for explaining the logic of machine learning models, measuring their fairness at the individual and group levels, and monitoring them in production environments to detect degradation in their accuracy or fairness.

This book is for Data Scientists, Machine Learning Developers, and Data Science professionals who want to ensure that their machine learning models' predictions are unbiased and accurate. A working knowledge of Python programming and basic concepts of machine learning model training and data validation is good to have.
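As a taste of the book's hands-on style, here is a minimal sketch of one common form of model explanation: permutation feature importance, shown here with scikit-learn. The synthetic dataset, model choice, and parameter values are illustrative assumptions, not examples taken from the book.

```python
# A minimal sketch of one form of model explanation: permutation feature
# importance. The synthetic data and model are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```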
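Group-level fairness can likewise be quantified in a few lines. The sketch below computes the demographic parity difference, the gap in positive-prediction rates between two groups, for hypothetical predictions and group labels; the data and the choice of metric are assumptions for illustration only.

```python
# A minimal sketch of a group-level fairness metric: the demographic
# parity difference. Predictions and group labels are hypothetical.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model predictions
group = np.array(["a", "a", "a", "a", "a",
                  "b", "b", "b", "b", "b"])         # protected attribute

rate_a = y_pred[group == "a"].mean()  # positive-prediction rate, group a
rate_b = y_pred[group == "b"].mean()  # positive-prediction rate, group b

# A value of 0 means both groups receive positive predictions at the
# same rate; larger absolute values indicate greater disparity.
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```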
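Finally, monitoring a deployed model can start from something as simple as tracking a metric over time and alerting on degradation. The sketch below simulates daily prediction batches whose error rate drifts upward and flags days that fall below an accuracy threshold; the simulated data, window size, and threshold are all assumptions, not a prescription from the book.

```python
# A minimal sketch of production monitoring: track accuracy over daily
# batches and flag degradation. Data and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)
threshold = 0.80  # alert if daily accuracy falls below this

for day in range(14):
    y_true = rng.integers(0, 2, size=200)
    # Simulate a model whose error rate drifts upward over time.
    flip = rng.random(200) < (0.05 + 0.02 * day)
    y_pred = np.where(flip, 1 - y_true, y_true)

    accuracy = (y_true == y_pred).mean()
    status = "ALERT" if accuracy < threshold else "ok"
    print(f"day {day:2d}: accuracy={accuracy:.3f} [{status}]")
```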