Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps

Resolve the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools to deploy Explainable AI (XAI) into your apps and reporting interfaces.

Key Features
- Learn explainable AI tools and techniques to process trustworthy AI results
- Understand how to detect, handle, and avoid common issues with AI ethics and bias
- Integrate fair AI into popular apps and reporting tools to deliver business value using Python and associated tools

Book Description
Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. Describing the problem, the model, and the relationships among variables and their findings is often subtle, surprising, and technically complex.

Hands-On Explainable AI (XAI) with Python will see you work through specific hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications.

You will build XAI solutions in Python, TensorFlow 2, Google Cloud’s XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.
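To give a flavor of what such tooling looks like, here is a minimal, hypothetical sketch of explaining a tree-based model with the open-source shap package (the diabetes dataset and random forest below are placeholders, not examples from the book):

```python
# Hypothetical sketch: explaining a tree-based model with SHAP.
# The dataset and model are placeholders; any fitted tree ensemble works similarly.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by their overall contribution to predictions.
shap.summary_plot(shap_values, X)
```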

You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and use Python to integrate predictions and machine learning model visualizations into user-explainable interfaces.

By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.

What you will learn
- Plan for XAI through the different stages of the machine learning life cycle
- Estimate the strengths and weaknesses of popular open-source XAI applications
- Examine how to detect and handle bias issues in machine learning data
- Review ethics considerations and tools to address common problems in machine learning data
- Share XAI design and visualization best practices
- Integrate explainable AI results using Python models
- Use XAI toolkits for Python in machine learning life cycles to solve business problems

Who this book is for
This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.

Some of the potential readers of this book:

- Professionals who already use Python for data science, machine learning, research, and analysis
- Data analysts and data scientists who want an introduction to explainable AI tools and techniques
- AI project managers who must face the contractual and legal obligations of AI explainability for the acceptance phase of their applications

Table of Contents
1. Explaining Artificial Intelligence with Python
2. White Box XAI for AI Bias and Ethics
3. Explaining Machine Learning with Facets
4. Microsoft Azure Machine Learning Model Interpretability with SHAP
5. Building an Explainable AI Solution from Scratch
6. AI Fai…

456 pages, Kindle Edition

Published July 31, 2020


About the author

Denis Rothman



Community Reviews

5 stars: 5 (71%)
4 stars: 1 (14%)
3 stars: 1 (14%)
2 stars: 0 (0%)
1 star: 0 (0%)
Walter Ullon
327 reviews, 164 followers
August 31, 2020
If you're looking to buy this book, then I don't need to tell you about the explosion in ML and AI-related applications across many industries over the last decade: e-commerce, streaming services, automobiles, finance, imaging, virtual assistants, etc.

Yet, for such a burgeoning field there is a dearth of resources when it comes to explainable AI. Part of the problem is that machine learning tools have become so refined and user-friendly that it is no longer necessary to have a good understanding of the core principles before calling an API and making predictions; just about any non-technical person could be trained to carry out a few easy steps to get low-hanging results. Thus, the demand is pretty high for introductory texts that walk users through the many techniques for handling data, training models, and presenting predictions.

However, when it comes to unearthing the insights that lead to such predictions, the literature is sadly lacking. It used to be the case that if you needed to train an easily explainable model, you could get away with some sort of regression or decision-tree-based approach. But the quantity and complexity of data in recent times have led us into the territory of "black-box models": neural networks and gradient-boosted trees. While these are very powerful, with intricate architectures that can handle everything from images, text, video, and sound to even creative processes, they are not easily explainable.
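As a rough, hypothetical sketch of that contrast (using scikit-learn's toy iris data rather than anything from the book), a shallow decision tree exposes its own logic directly, which is exactly what the deeper black-box models give up:

```python
# Hypothetical sketch: a shallow decision tree is inspectable by design.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# The full decision logic can be printed as readable if/else rules...
print(export_text(tree, feature_names=iris.feature_names))

# ...and global feature importances are available directly on the fitted model.
for name, importance in zip(iris.feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.3f}")
```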

More and more countries are recognizing the uses and misuses of AI tools and calling for legislation to rein in the scope and manner in which these tools are applied. They are rightfully questioning the process by which these tools were designed and whether their creators have taken full account of the possible consequences. In the USA alone, SR 11-7 was written with an eye to curbing model risk. In the EU, you need look no further than the GDPR (General Data Protection Regulation). Among their chief concerns is the issue of built-in bias.

So, the days when you could easily build and deploy models and hide behind their accuracy, as long as they got you the results you wanted, are coming to an end.

That's why I think this book is a bit of a gem: it gets the ball rolling by training ML practitioners not only to recognize the need to explain AI models but, more importantly, by giving them the tools to do so.

I wish I had had a book like this a year or two ago, when I was developing explainers for anomaly detection and neural networks for the financial sector.

It is both highly accessible and authoritative in its survey of methods for extracting root-cause-level insight from the predictions made by the models. It is very current as well: SHAP, LIME, Google's What-If Tool, and more are discussed with several illuminating examples to help the reader grasp the concepts and practice.
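For anyone who has not used these libraries, here is a rough, hypothetical sketch of the kind of local explanation LIME produces (the wine dataset and gradient-boosting classifier below are placeholders, not the book's examples):

```python
# Hypothetical sketch: a local LIME explanation for a single prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier

data = load_wine()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction as a weighted list of local feature contributions.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```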

What is even better is that the author does not hide behind the same datasets used over and over again in every ML text, such as MNIST, CIFAR, Boston Housing, Titanic, etc. This alone is so refreshing, and it made the book such a pleasure to read. I wish more authors would follow his lead.

Overall, I wholeheartedly recommend this book to any ML & AI practitioner looking to understand their models and data better and, more importantly, to those looking to future-proof their organization's AI capital.
Thorben
49 reviews
April 28, 2022
The book was easy to read. This is kind of positive and negative at the same time. The practical aspects were easily reproducible, but the book lacked theoretical depth. Often the parameters were just set to their default values without a proper explanation of what each parameter's purpose is. If you want a quick start with several XAI technologies, go ahead, but I gained no real theoretical knowledge from just reading the book.
