On Efficiently Explaining Graph-Based Classifiers
Abstract
Recent work has not only shown that decision trees (DTs) may not be interpretable, but has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT. This paper shows that for a wide range of classifiers, collectively referred to as decision graphs, which include decision trees and binary decision diagrams as well as their multi-valued variants, there exist polynomial-time algorithms for computing one PI-explanation. In addition, the paper proposes a polynomial-time algorithm for computing one contrastive explanation. These novel algorithms build on explanation graphs (XpGs). XpGs denote a graph representation that enables both theoretically and practically efficient computation of explanations for decision graphs. Furthermore, the paper proposes a practically efficient solution for the enumeration of explanations, and studies the complexity of deciding whether a given feature is included in some explanation. For the concrete case of decision trees, the paper shows that the set of all contrastive explanations can be enumerated in polynomial time. Finally, the experimental results validate the practical applicability of the algorithms proposed in the paper on a wide range of publicly available benchmarks.
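To make the notion of a PI-explanation concrete, the sketch below computes one PI-explanation (a subset-minimal set of feature assignments sufficient for the prediction) for a toy decision tree via deletion-based linear search. This is an illustrative assumption, not the paper's XpG-based algorithm: the tree encoding, helper names, and search strategy are all hypothetical, though the per-feature check is polynomial for DTs, consistent with the abstract's complexity claim.

```python
# Hedged sketch: one PI-explanation for a toy binary decision tree,
# found by deletion-based linear search. Encoding and names are
# illustrative assumptions, not the paper's XpG construction.

# Internal node = (feature_index, subtree_if_0, subtree_if_1); leaf = class.
TREE = (0,
        (1, 0, 1),   # f0 == 0: decide by f1
        (2, 1, 0))   # f0 == 1: decide by f2

def predictions(node, fixed):
    """Yield every class reachable when only `fixed` features are constrained."""
    if not isinstance(node, tuple):
        yield node
        return
    feat, left, right = node
    if feat in fixed:
        yield from predictions(left if fixed[feat] == 0 else right, fixed)
    else:  # free feature: both branches remain reachable
        yield from predictions(left, fixed)
        yield from predictions(right, fixed)

def one_pi_explanation(tree, instance):
    """Drop each feature whose removal keeps the prediction invariant over
    all completions; the surviving assignments are subset-minimal."""
    target = next(predictions(tree, instance))
    fixed = dict(instance)
    for feat in list(fixed):
        trial = {k: v for k, v in fixed.items() if k != feat}
        if all(c == target for c in predictions(tree, trial)):
            fixed = trial
    return fixed

# Instance f0=0, f1=1, f2=0 is classified 1; f0 turns out to be redundant,
# since f1=1 and f2=0 force class 1 on both sides of the root.
print(one_pi_explanation(TREE, {0: 0, 1: 1, 2: 0}))  # → {1: 1, 2: 0}
```

Each deletion check enumerates the reachable leaves of the tree, so the whole search is polynomial in the tree size, which is what makes this style of explanation tractable for DTs.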
 Publication:

arXiv e-prints
 Pub Date:
 June 2021
 arXiv:
 arXiv:2106.01350
 Bibcode:
 2021arXiv210601350H
 Keywords:

 Computer Science - Artificial Intelligence;
 Computer Science - Machine Learning