Feature perturbation augmentation for reliable evaluation of importance estimators in neural networks
Abstract
Post-hoc explanation methods (such as importance estimators and saliency maps) attempt to make the inner workings of deep neural networks (DNNs), which otherwise act as black-box models, more comprehensible and trustworthy. However, since a ground truth is generally lacking, local post-hoc explanation methods, which assign importance scores to input features, are challenging to evaluate. One of the most popular evaluation frameworks is to perturb the features deemed important by an explanation and to measure the resulting change in prediction accuracy. Intuitively, a large decrease in prediction accuracy indicates that the explanation has correctly quantified the importance of features with respect to the prediction outcome (e.g., logits). However, the change in the prediction outcome may stem from perturbation artifacts: perturbed samples in the test dataset are out of distribution (OOD) with respect to the training dataset and can therefore disturb the model in unexpected ways. To overcome this challenge, we propose feature perturbation augmentation (FPA), which creates perturbed images and adds them to the training data. Using three different datasets and several importance estimators, our computational experiments demonstrate that FPA makes DNNs more robust against perturbations. During evaluation, we consider model accuracy curves obtained by perturbing input features in most-important-first (MIF) and least-important-first (LIF) order, which are quantitatively summarized as fidelity metrics. Additionally, our results suggest that the frequently observed fluctuations in the sign of importance scores describe the model characteristics rather accurately once perturbation artifacts are suppressed by FPA. Overall, FPA is an intuitive and straightforward data augmentation technique that renders the evaluation of post-hoc explanations more trustworthy.
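A minimal sketch of how FPA and the MIF/LIF perturbation used for evaluation could look in a PyTorch-style pipeline. This is not the authors' implementation (see the repository link below); the class and function names, the masking fraction, the perturbation baseline, and the application probability are illustrative assumptions.

```python
# Illustrative sketch of feature perturbation augmentation (FPA).
# All names and hyperparameters here are assumptions, not the paper's code.
import torch


class FeaturePerturbation:
    """Randomly perturb a fraction of input pixels during training so that
    the model also sees perturbed (partially masked) images."""

    def __init__(self, max_fraction=0.5, baseline=0.0, p=0.5):
        self.max_fraction = max_fraction  # largest share of pixels to perturb
        self.baseline = baseline          # value used to overwrite pixels
        self.p = p                        # probability of applying FPA

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        # img is expected to be a (C, H, W) tensor, e.g. after ToTensor()
        if torch.rand(1).item() > self.p:
            return img
        c, h, w = img.shape
        # draw how many pixels to perturb for this sample
        fraction = torch.rand(1).item() * self.max_fraction
        n_perturb = int(fraction * h * w)
        if n_perturb == 0:
            return img
        # choose pixel locations uniformly at random and overwrite all channels
        idx = torch.randperm(h * w)[:n_perturb]
        out = img.clone().reshape(c, h * w)
        out[:, idx] = self.baseline
        return out.reshape(c, h, w)


def perturb_mif(img: torch.Tensor, importance: torch.Tensor, k: int,
                baseline: float = 0.0) -> torch.Tensor:
    """Perturb the k most-important pixels (MIF order) given an importance map
    of shape (H, W); pass -importance to obtain least-important-first (LIF)."""
    c, h, w = img.shape
    idx = torch.topk(importance.flatten(), k).indices
    out = img.clone().reshape(c, h * w)
    out[:, idx] = baseline
    return out.reshape(c, h, w)


# Hypothetical usage inside a standard augmentation pipeline:
# transform = transforms.Compose([transforms.ToTensor(), FeaturePerturbation()])
```

In this sketch, the same baseline-value perturbation is used at training time (random pixels) and at evaluation time (pixels ranked by an importance estimator), so that MIF/LIF accuracy curves are computed on inputs the model has already learned to tolerate.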
Reproducible code and pre-trained models with FPA are available on GitHub: https://github.com/lenbrocki/Feature-Perturbation-Augmentation
- Publication:
- Pattern Recognition Letters
- Pub Date:
- December 2023
- DOI:
- 10.1016/j.patrec.2023.10.012
- arXiv:
- arXiv:2303.01538
- Bibcode:
- 2023PaReL.176..131B
- Keywords:
- Deep neural network;
- Artificial intelligence;
- Interpretability;
- Explainability;
- Fidelity;
- Importance estimator;
- Saliency map;
- Data augmentation;
- Feature perturbation;
- Computer Science - Machine Learning;
- Computer Science - Computer Vision and Pattern Recognition
- E-Print:
- ICLR 2023 Workshop on Trustworthy ML