Denoising Seismic Signals Using Wavelet-Transform-Based Neural Networks
Abstract
Seismic waveform data recorded at stations can be thought of as a superposition of the signal from a source of interest and noise from other sources. Frequency-based filtering methods of waveform denoising typically struggle in cases where the target signal and noise occupy similar frequency bands. Recently, denoising techniques based on deep-learning convolutional neural networks (CNNs), in which a recorded waveform is decomposed into signal and noise components, have led to improved results. These CNN methods predict signal and noise masks for the input waveform; the signal and noise components of that waveform are then estimated by multiplying each mask with the short-time Fourier transform (STFT) of the input waveform and converting the result back into the time domain. Advances in image denoising have shown the benefits of incorporating discrete wavelet transforms (DWTs) into CNN architectures to create multi-level wavelet CNN (MWCNN) models, which allow for larger receptive fields without sacrificing computational efficiency. Here, we use a data set of over 270,000 seismograms constructed from recordings of the University of Utah Seismograph Stations network to compare the performance of CNN and MWCNN denoising models. Evaluation of both models on test data shows that the MWCNN model (average cross-correlation, CC, value of 0.85) outperforms the CNN model (average CC value of 0.75) in its ability to recover the ground-truth signal component with little amplitude distortion. Model evaluation on real data shows that both the CNN and MWCNN models outperform standard band-pass filtering, with average signal-to-noise ratio (SNR) improvements of ~4.8 and ~8.8 dB, respectively. An initial evaluation of the MWCNN model on continuous data suggests the denoiser can also be used to improve event detection capabilities.
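For concreteness, the mask-based separation described in the abstract can be sketched as below. This is a minimal illustration, not the authors' implementation: the masks here are uniform placeholders, whereas in the CNN/MWCNN models they would be predicted per STFT bin from the input waveform, and the sampling rate, window length, and function names are assumed for the example.

```python
import numpy as np
from scipy.signal import stft, istft


def apply_masks(waveform, signal_mask, noise_mask, fs=100.0, nperseg=30):
    """Estimate signal and noise components by applying masks to the
    waveform's short-time Fourier transform and inverting back to time."""
    # Forward STFT of the recorded waveform
    _, _, Zxx = stft(waveform, fs=fs, nperseg=nperseg)

    # Element-wise multiplication of each mask with the complex STFT
    signal_stft = signal_mask * Zxx
    noise_stft = noise_mask * Zxx

    # Inverse STFT back to the time domain for each component
    _, signal_est = istft(signal_stft, fs=fs, nperseg=nperseg)
    _, noise_est = istft(noise_stft, fs=fs, nperseg=nperseg)
    return signal_est, noise_est


# Example with placeholder masks: a trained denoiser would predict values
# in [0, 1] for each time-frequency bin, with the two masks summing to ~1.
fs = 100.0                               # assumed sampling rate (Hz)
x = np.random.randn(int(30 * fs))        # stand-in for a 30 s recording
_, _, Zxx = stft(x, fs=fs, nperseg=30)
signal_mask = np.full(Zxx.shape, 0.5)    # placeholder for a model output
noise_mask = 1.0 - signal_mask
signal_est, noise_est = apply_masks(x, signal_mask, noise_mask, fs=fs)
```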
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2022
- Bibcode: 2022AGUFM.S52E0099Q