Explaining Explanations to Society
Abstract
There is a disconnect between explanatory artificial intelligence (XAI) methods and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.). The questions that experts in artificial intelligence (AI) ask of opaque systems yield inside explanations, focused on debugging, reliability, and validation. These differ from the questions that society will ask of these systems in order to build trust and confidence in their decisions. Although explanatory AI systems can answer many of the questions that experts desire, they often do not explain why they made a decision in a way that is both precise (true to the model) and understandable to humans. Such human-understandable outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we focus on XAI methods for deep neural networks (DNNs) because of their use in decision-making and their inherent opacity. We explore the types of questions that explanatory DNN systems can answer and discuss the challenges in building explanatory systems that provide outside explanations for societal requirements and benefit.
- Publication: arXiv e-prints
- Pub Date: January 2019
- DOI: 10.48550/arXiv.1901.06560
- arXiv: arXiv:1901.06560
- Bibcode: 2019arXiv190106560G
- Keywords: Computer Science - Artificial Intelligence
- E-Print: NeurIPS 2018 Workshop on Ethical, Social and Governance Issues in AI