Explainable AI for Practitioners: Designing and Implementing Explainable ML Solutions

eBook: $67.99 (In Stock)

Overview

Most intermediate-level machine learning books focus on how to optimize models by increasing accuracy or decreasing prediction error. But this approach often overlooks the importance of understanding why and how your ML model makes the predictions that it does.

Explainability methods provide an essential toolkit for better understanding model behavior, and this practical guide brings together best-in-class techniques for model explainability. Experienced machine learning engineers and data scientists will learn hands-on how these techniques work so that they can apply these tools more easily in their daily workflows.

This essential book provides:

  • A detailed look at some of the most useful and commonly used explainability techniques, highlighting pros and cons to help you choose the best tool for your needs
  • Tips and best practices for implementing these techniques
  • A guide to interacting with explainability and avoiding common pitfalls
  • The knowledge you need to incorporate explainability in your ML workflow to help build more robust ML systems
  • Advice about explainable AI techniques, including how to apply them to models that consume tabular, image, or text data
  • Example implementation code in Python using well-known explainability libraries for models built in Keras and TensorFlow 2.0, PyTorch, and HuggingFace

Product Details

ISBN-13: 9781098119096
Publisher: O'Reilly Media, Incorporated
Publication date: 10/31/2022
Sold by: Barnes & Noble
Format: eBook
Pages: 278
File size: 14 MB

About the Author

Michael Munn is a research software engineer at Google. His work focuses on better understanding the mathematical foundations of machine learning and how those insights can be used to improve machine learning models at Google. Previously, he worked in the Google Cloud Advanced Solutions Lab helping customers design, implement, and deploy machine learning models at scale. Michael has a PhD in mathematics from the City University of New York. Before joining Google, he worked as a research professor.


David Pitman is a staff engineer working in Google Cloud on the AI Platform, where he leads the Explainable AI team. He's also a co-organizer of PuPPy, the largest Python group in the Pacific Northwest. David has a Master of Engineering degree and a BS in computer science from MIT, where he previously served as a research scientist.
