Robust Explainable AI

The area of Explainable Artificial Intelligence (XAI) is concerned with providing methods and tools to improve the interpretability of black-box learning models. While several approaches exist to generate explanations, they often lack robustness, e.g., they may produce completely different explanations for similar inputs. This phenomenon has troubling implications: a lack of robustness indicates that explanations do not capture the underlying decision-making process of a model and thus cannot be trusted.

This book introduces Robust Explainable AI, a rapidly growing field whose focus is to ensure that explanations for machine learning models adhere to the highest robustness standards. We present the most important concepts, methodologies, and results in the field, with a particular focus on techniques developed for feature attribution methods and counterfactual explanations for deep neural networks.

As prerequisites, some familiarity with neural networks and XAI approaches is desirable but not mandatory. The book is designed to be self-contained: relevant concepts are introduced when needed, together with examples to ensure a successful learning experience.


Product Details

ISBN-13: 9783031890222
Publisher: Springer-Verlag New York, LLC
Publication date: 05/24/2025
Series: SpringerBriefs in Intelligent Systems
Sold by: Barnes & Noble
Format: eBook
File size: 5 MB

About the Author

Francesco Leofante is a researcher affiliated with the Centre for Explainable AI at Imperial College London. His research focuses on explainable AI, with special emphasis on counterfactual explanations for AI-based decision-making. His recent work has highlighted several vulnerabilities of counterfactual explanations and proposed innovative solutions to improve their robustness.

Matthew Wicker is an Assistant Professor (Lecturer) at Imperial College London and a Research Associate at The Alan Turing Institute. He works on formal verification of trustworthy machine learning properties with collaborators from academia and industry. His work focuses on provable guarantees for diverse notions of trustworthiness for machine learning models, in order to enable their responsible deployment.

Table of Contents

Foreword
Preface
Acknowledgements
1. Introduction
2. Explainability in Machine Learning: Preliminaries & Overview
3. Robustness of Counterfactual Explanations
4. Robustness of Saliency-Based Explanations