Opening the "Black Box": Tools to Improve Understanding of Neural Network Reasoning for Geoscience Applications
Abstract
Artificial neural networks (ANNs) are emerging in many geoscience applications for a large variety of tasks, including prediction, classification, anomaly detection, and potentially the representation of subgrid processes in climate or weather models. We highlight methods developed within the field of explainable AI (XAI) for the interpretation of ANN models. This effort extends recent work (McGovern et al., 2019) on the interpretation of ML methods for geoscience applications. However, we focus on a different set of methods, namely layer-wise relevance propagation (LRP), which we believe is particularly useful for geoscience applications. LRP methods (Bach et al., 2015), such as Deep Taylor decomposition, seek to explain the decision making of ANNs by identifying which elements of the input data are most important for the model's output. Understanding these details is important in order to a) investigate whether an ANN model uses a proper model representation rather than exploiting artifacts; b) aid in the targeted optimization and debugging of an ANN model; and c) potentially learn new science from an ANN model, e.g., by discovering new relevant properties of the input data. We provide a basic introduction to LRP methods and demonstrate their usefulness for atmospheric science applications.
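To make the core idea concrete, below is a minimal NumPy sketch of one common LRP variant, the LRP-ε rule, applied to a toy fully connected network. The layer sizes, random weights, and the helper name `lrp_dense` are illustrative assumptions, not taken from the abstract or from Bach et al. (2015); the general principle is that each layer's output relevance is redistributed to that layer's inputs in proportion to their contributions to the pre-activations.

```python
# Minimal sketch of the LRP-epsilon rule for fully connected layers (NumPy).
# All names and sizes here are hypothetical, for illustration only.
import numpy as np

def lrp_dense(a, w, b, relevance_out, eps=1e-6):
    """Redistribute a layer's output relevance onto its inputs (LRP-epsilon).

    a: input activations (n_in,); w: weights (n_in, n_out); b: bias (n_out,).
    """
    z = a @ w + b                                   # pre-activations
    stab = eps * np.where(z >= 0, 1.0, -1.0)        # stabilizer, avoids division by zero
    s = relevance_out / (z + stab)                  # relevance per unit of pre-activation
    return a * (w @ s)                              # relevance of each input element

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)       # toy hidden layer
w2, b2 = rng.normal(size=(3, 1)), np.zeros(1)       # toy output layer

x = rng.normal(size=4)                              # one input sample
h = np.maximum(0.0, x @ w1 + b1)                    # forward pass with ReLU
y = h @ w2 + b2                                     # network output

r_hidden = lrp_dense(h, w2, b2, y)                  # start from the output value
r_input = lrp_dense(x, w1, b1, r_hidden)            # "heatmap" over input features
print(r_input, r_input.sum(), y)                    # relevance approximately conserved
```

With zero biases and a small ε, the summed input relevance approximately equals the network output, which is the conservation property that makes LRP heatmaps interpretable as a decomposition of the prediction.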
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2019
- Bibcode: 2019AGUFM.A51U2666E
- Keywords:
  - 0365 Troposphere: composition and chemistry (ATMOSPHERIC COMPOSITION AND STRUCTURE)
  - 3336 Numerical approximations and analyses (ATMOSPHERIC PROCESSES)
  - 0520 Data analysis: algorithms and implementation (COMPUTATIONAL GEOPHYSICS)
  - 0555 Neural networks, fuzzy logic, machine learning (COMPUTATIONAL GEOPHYSICS)