Explainable AI for Seismology: An interpretable convolutional neural network architecture for earthquake detection
Abstract
Geophysicists are increasingly adopting machine learning (ML) techniques to analyze seismic signals. Deep neural networks have emerged as particularly powerful tools for classifying waveforms and identifying seismic phases, learning complex patterns and decision functions with high predictive accuracy. However, their complexity and large number of parameters also make these black-box models difficult to inspect and interpret. The emerging field of explainable artificial intelligence (XAI) is developing tools and techniques that enable ML solutions to be understood by human analysts. This work explores and compares three classes of XAI methods for interpreting a one-dimensional convolutional neural network (CNN) model for earthquake phase detection. We apply popular gradient- and perturbation-based attribution maps and an interpretable local surrogate model to a pretrained CNN; for the latter, we introduce a variation of LIME adapted for waveform data. Both approaches provide post hoc interpretations of a model after it has been trained, with explanations taking the form of visualizations that capture some aspect of model behavior. We also explore a third method that introduces XAI at the design stage, constructing a model architecture that is intrinsically interpretable. In this approach, we modify the dense (prediction) layer of the original CNN and augment the architecture with parallel branches for prediction and signal reconstruction. The model learns representative waveforms that are used directly to compute the predictions and can be used to explain the model outputs. The new prediction model matches the accuracy and closely follows the architecture of the original CNN, with the added benefit of greater interpretability.
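The abstract does not include implementation details. As a rough illustration of the first class of methods, a gradient-based attribution map can be computed by backpropagating a class score to the input samples. The sketch below is a generic saliency-map computation, not the authors' code; the PyTorch framework, the (batch, channels, samples) input layout, and the class-logit output are all illustrative assumptions.

```python
# Minimal sketch of a gradient-based attribution (saliency) map for a
# 1-D CNN phase detector. Assumes the model maps a (batch, channels,
# samples) tensor to per-class logits; these shapes are illustrative.
import torch

def saliency_map(model: torch.nn.Module, waveform: torch.Tensor,
                 class_idx: int) -> torch.Tensor:
    """Return |d score / d input| for one waveform of shape (channels, samples)."""
    model.eval()
    x = waveform.detach().unsqueeze(0).clone().requires_grad_(True)  # add batch dim
    score = model(x)[0, class_idx]   # logit for the class of interest
    score.backward()                 # gradients w.r.t. the input samples
    return x.grad.abs().squeeze(0)   # large values mark influential samples
```

Plotting the returned tensor alongside the seismogram highlights which portions of the waveform most influenced the phase classification.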
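The authors' waveform-adapted variation of LIME is not specified in the abstract. A generic LIME-style local surrogate for time-series input might work as sketched below: divide the trace into contiguous segments, randomly mask segments, query the black-box model on the perturbed traces, and fit a weighted linear surrogate whose coefficients attribute the prediction to segments. The segmentation scheme, zero-masking, similarity kernel, and ridge surrogate are all assumptions for illustration.

```python
# Sketch of a LIME-style local surrogate over contiguous waveform segments.
# Masking with zeros and a ridge-regression surrogate are illustrative choices.
import numpy as np
from sklearn.linear_model import Ridge

def lime_waveform(predict_fn, waveform, class_idx,
                  n_segments=20, n_perturb=500, rng=None):
    """Attribute a class score to segments of a 1-D waveform.

    predict_fn: maps an array of shape (n, len(waveform)) to class probabilities.
    Returns one importance weight per segment.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    seg_ids = np.array_split(np.arange(waveform.size), n_segments)
    masks = rng.integers(0, 2, size=(n_perturb, n_segments))     # segment on/off
    perturbed = np.tile(waveform.astype(float), (n_perturb, 1))
    for i, mask in enumerate(masks):
        for s, idx in enumerate(seg_ids):
            if mask[s] == 0:
                perturbed[i, idx] = 0.0                          # mask segment
    preds = predict_fn(perturbed)[:, class_idx]
    # Weight perturbations by similarity to the unmasked original trace.
    weights = np.exp(-(n_segments - masks.sum(axis=1)) / n_segments)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_                                       # per-segment importance
```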
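For the third, intrinsically interpretable approach, one plausible realization of "parallel branches for prediction and signal reconstruction" with learned representative waveforms is a prototype network: a shared encoder feeds a decoder branch (so latent codes stay decodable into seismograms) and a prediction branch that scores inputs by similarity to learned prototype codes, which can themselves be decoded into representative waveforms. The sketch below is in that spirit only; the layer sizes, prototype count, and distance-based similarity are assumptions, not the architecture from the abstract.

```python
# Hedged sketch of a prototype-based interpretable 1-D CNN with parallel
# prediction and reconstruction branches. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class InterpretablePhaseCNN(nn.Module):
    def __init__(self, n_channels=3, n_samples=400, n_classes=3,
                 latent_dim=64, n_prototypes=9):
        super().__init__()
        # Shared convolutional encoder: waveform -> latent code.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=21, stride=2, padding=10),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Reconstruction branch: latent -> waveform, trained with an L2 loss
        # so latent codes (and prototypes) decode into plausible seismograms.
        self.decoder = nn.Linear(latent_dim, n_channels * n_samples)
        self.n_channels, self.n_samples = n_channels, n_samples
        # Learned prototype codes; decoding them yields the "representative
        # waveforms" that explain the predictions.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        # Prediction branch: prototype similarities -> class logits,
        # replacing the original dense prediction layer.
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                               # (B, latent_dim)
        recon = self.decoder(z).view(-1, self.n_channels, self.n_samples)
        d2 = torch.cdist(z, self.prototypes).pow(2)       # (B, n_prototypes)
        logits = self.classifier(-d2)                     # similarity scoring
        return logits, recon

    def representative_waveforms(self):
        # Decode prototypes into waveform space for visualization.
        with torch.no_grad():
            return self.decoder(self.prototypes).view(
                -1, self.n_channels, self.n_samples)
```

Such a model would typically be trained on a joint objective (classification loss plus reconstruction loss), so that each prediction can be explained by showing the decoded prototypes most similar to the input.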
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2021
- Bibcode: 2021AGUFM.S34A..05B