A supervised autoencoder (SAE) for tele-seismic event distance prediction and waveform compression
Abstract
The success of a deep learning model depends on its ability to solve specific tasks while remaining general. To keep a model generic, one can impose regularization or strategically integrate auxiliary tasks into model training. The supervised autoencoder (SAE) is a type of model that can adopt both approaches: it achieves good performance and generalizability by jointly reconstructing the original inputs and solving tasks on the encoding layer. In this study, we aim to train an SAE to extract features that are informative of teleseismic event distance while preserving rich information from the original waveforms. We do so by adding a station-event distance predictor on the encoding layer and jointly training the predictor with the autoencoder. We model the training labels as a narrow Gaussian probability distribution centered at the a priori event-station distance. The regression loss of the predictor is then defined as the mean squared error between the prediction and the label. We trained the SAE on approximately 150,000 three-component teleseismic P-waves recorded by more than 500 broadband stations globally. These P-waves were generated by M5.5 to M6.5 earthquakes between 2000 and 2016. Our preliminary results suggest the SAE is capable of capturing the low-frequency content of teleseismic P-waves at a compression ratio of at least 10 without losing generalizability. The dimension-reduced salient features extracted by the SAE can be further used for a wide range of machine learning applications, such as single-station location, seismic event association, and waveform quality control.
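The loss formulation described in the abstract — a Gaussian soft label centered at the a priori event-station distance, plus a joint reconstruction-and-regression objective — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the distance grid, the Gaussian width `sigma`, and the loss weighting `alpha` are all hypothetical choices.

```python
import numpy as np

def gaussian_label(distance_deg, grid, sigma=1.0):
    """Narrow Gaussian probability distribution over a distance grid,
    centered at the a priori event-station distance.
    sigma (in degrees) is an assumed hyperparameter, not a reported value."""
    p = np.exp(-0.5 * ((grid - distance_deg) / sigma) ** 2)
    return p / p.sum()

def joint_sae_loss(x, x_hat, pred, label, alpha=1.0):
    """Joint SAE objective: waveform reconstruction MSE plus regression MSE
    between the predicted and target label distributions.
    The weighting alpha is a hypothetical hyperparameter."""
    recon = np.mean((x - x_hat) ** 2)
    regress = np.mean((pred - label) ** 2)
    return recon + alpha * regress

# Example: a label for a 30-degree event on a 0-180 degree grid.
grid = np.linspace(0.0, 180.0, 361)          # assumed 0.5-degree spacing
label = gaussian_label(30.0, grid)
```

In a full training setup, `x_hat` would come from the decoder and `pred` from a distance-predictor head attached to the encoding layer, so minimizing `joint_sae_loss` trains both branches together.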
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2021
- Bibcode: 2021AGUFM.S35C0239C