ServiceNow Research

Explaining explainable AI

Abstract

Financial institutions need AI systems that are explainable in order to develop, use, and supervise these systems effectively and responsibly. However, the ever-expanding number of mathematical explainability techniques can obscure the fact that explanation is fundamentally a social interaction between an explainer and an explainee (the stakeholder receiving an explanation). Therefore, in operationalizing the concept of explainable AI, financial institutions should keep in mind the many different types of stakeholders - including decision-subjects, business users, internal model checkers, external auditors, and regulators - and how their needs for different types of explanations vary. This variety of needs reflects the diverse sources of demand for explanations, which range from direct or indirect legal obligations, to enabling auditing, to appropriately calibrating trust and enhancing the performance of teams using AI. In this chapter, we use a human-centric, cross-disciplinary approach to provide a holistic introduction to explainable AI. We cover: the different kinds of explanations; levels of explainability; example explainability techniques; the drivers of demand for explanations; the qualities of good explanations; and the challenges financial institutions are likely to face when trying to implement explainable AI systems.

Publication
Book Chapter
Nicolas Chapados
VP of Research

VP of Research, AI Research Management, located in Montreal, QC, Canada.