Black-box machine learning (ML) models have become increasingly popular in practice. They can offer strong performance, especially in computer vision (CV) and natural language processing (NLP) applications, but they have been criticized for their lack of transparency. This has led to a plethora of post-hoc explainability techniques aimed at helping humans understand the predictions of black-box ML models. A natural way to explain a decision or a concept is to use examples. In this work, we present an ML practitioner's perspective on the use of sample-based explainability techniques, i.e., methods that aim to ``explain by example''. We discuss an industry case study, share practical insights, and highlight opportunities for future work.