Explaining by Example: A Practitioner’s Perspective

Abstract

Black-box machine learning (ML) models have become increasingly popular in practice. They can offer great performance, especially in computer vision (CV) and natural language processing (NLP) applications, but they have been criticized for their lack of transparency. This has led to a plethora of post-hoc explainability techniques aimed at helping humans understand the predictions of black-box ML models. A natural way to explain a decision or a concept is to use examples. In this work, we present an ML practitioner's perspective on the use of sample-based explainability techniques, i.e., methods that aim to "explain by example". We discuss an industry case study, share some practical insights, and highlight opportunities for future work.
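The abstract does not commit to a particular algorithm, but a common instantiation of "explain by example" is nearest-neighbor retrieval in a model's learned representation: the training examples closest to a test input serve as its explanation. Below is a minimal sketch under that assumption, using a scikit-learn MLP whose last hidden layer is treated as the embedding space; the `embed` helper and the layer choice are illustrative, not the paper's method.

```python
# Minimal "explain by example" sketch: for a test input, retrieve the training
# examples nearest to it in the model's representation space and present them
# as the explanation. The embedding choice here is an assumption for illustration.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(X, y)

def embed(inputs):
    # Use the MLP's (ReLU) hidden layer as the representation space.
    return np.maximum(inputs @ clf.coefs_[0] + clf.intercepts_[0], 0.0)

index = NearestNeighbors(n_neighbors=3).fit(embed(X))

def explain_by_example(x):
    """Return indices of the training examples most similar to x in embedding space."""
    _, idx = index.kneighbors(embed(x.reshape(1, -1)))
    return idx[0]

x_test = X[42]
print("prediction:", clf.predict(x_test.reshape(1, -1))[0])
print("nearest training examples:", explain_by_example(x_test))
```

In practice the retrieved examples would be shown to the end user alongside the prediction, so they can judge whether the model is relying on sensible precedents.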

Publication
Montreal AI Symposium (MAIS)
Marc-Etienne Brunet
Applied Research Scientist

Applied Research Scientist at the AI Trust and Governance Lab in Toronto, ON, Canada.

Masoud Hashemi
Applied Research Scientist

Applied Research Scientist at the AI Trust and Governance Lab in Toronto, ON, Canada.