Using different XAI baselines to answer different science questions
Abstract
Methods of eXplainable Artificial Intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of Neural Networks (NNs) by determining which features in the input contribute the most to an NN prediction. In this work, we aim to highlight that the task of attributing a prediction to the input does not have a single solution. Instead, the attribution depends greatly on the considered baseline, a fact that has been overlooked in the literature. To date, the baseline has either been chosen by the user (yet it is rarely stated or elaborated upon) or set a priori by the algorithm of the XAI method (sometimes without the user being aware of that choice). We argue that the dependence of the attribution on the baseline can be beneficial, as different baselines can be used to gain insights into different science questions. To illustrate the above, we use the CESM2 LE dataset (a large ensemble of historical and future climate simulations forced with the SSP3-7.0 scenario) and train a fully connected NN to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We then use various XAI methods and different baselines to attribute the network predictions to the input. We show that attributions differ substantially across baselines, as each baseline corresponds to a different science question. We conclude by discussing important implications and considerations for the use of baselines in XAI research.
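As a concrete illustration of the baseline dependence, the sketch below implements integrated gradients, one common attribution method in which the baseline enters explicitly. This is a minimal hypothetical example, not the study's code: the model, grid size, input, and both baselines are illustrative assumptions. The point is only that feeding the same input with two different baselines yields two different attribution maps.

```python
# Hypothetical sketch (not the authors' code): integrated gradients with a
# user-chosen baseline. The model, grid size, and baselines are assumptions
# made for illustration only.
import torch
import torch.nn as nn

n_grid = 144 * 96  # assumed size of a flattened lat-lon temperature map

# Stand-in for a fully connected NN mapping a temperature map to a single
# scalar (here, the forced global-warming signal).
model = nn.Sequential(nn.Linear(n_grid, 64), nn.ReLU(), nn.Linear(64, 1))

def integrated_gradients(model, x, baseline, steps=64):
    """Riemann-sum approximation of integrated gradients:
    attribution_i = (x_i - baseline_i) * mean_alpha dF(baseline + alpha*(x - baseline)) / dx_i
    """
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    # Points along the straight-line path from the baseline to the input.
    path = baseline + alphas * (x - baseline)          # shape: (steps, n_grid)
    path.requires_grad_(True)
    model(path).sum().backward()
    avg_grad = path.grad.mean(dim=0)                   # average gradient along the path
    return (x - baseline) * avg_grad                   # scaled by distance from baseline

x = torch.randn(1, n_grid)                             # mock annual temperature map

# Two different baselines -> two different attribution maps, i.e. answers to
# two different questions: "relative to a zero input" vs. "relative to some
# reference state" (mocked here as a random map).
attr_zero = integrated_gradients(model, x, torch.zeros_like(x))
attr_ref = integrated_gradients(model, x, torch.randn_like(x))
print((attr_zero - attr_ref).abs().mean())             # attributions differ
```

In this framing, choosing the baseline amounts to choosing the reference state against which the prediction is explained, which is why different baselines answer different science questions.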
- Publication:
- AGU Fall Meeting Abstracts
- Pub Date:
- December 2022
- Bibcode:
- 2022AGUFM.H25A..07M