Laplace-domain waveform inversion using various optimizers.
Abstract
Most full waveform inversions use gradient-based optimization methods for rapid convergence of the objective function. Deep neural networks likewise rely on gradient-based optimizers, such as the Momentum optimizer, Nesterov accelerated gradient, Adagrad, RMSProp, and the Adam optimizer, for rapid convergence of the loss function during training. Motivated by this similarity, we applied these neural-network optimizers to a Laplace-domain full waveform inversion algorithm and compared each optimizer's convergence behavior and inversion results on a salt benchmark model.
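The optimizers listed above all follow the same template: accumulate statistics of the gradient and use them to scale the model update. As one illustration, here is a minimal NumPy sketch of the Adam update rule applied to a toy quadratic misfit. The abstract gives no implementation details, so the function name, hyperparameter defaults, and the toy objective below are illustrative stand-ins; in the actual inversion the gradient would come from adjoint-state modeling of the Laplace-domain objective.

```python
import numpy as np

def adam_update(m, v, grad, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step on a model parameter vector.

    m, v : running first- and second-moment estimates of the gradient
    grad : gradient of the objective w.r.t. the model (e.g. velocity)
    t    : 1-based iteration counter (used for bias correction)
    Returns (update, new_m, new_v).
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    step = -lr * m_hat / (np.sqrt(v_hat) + eps)
    return step, m, v

# Toy usage: minimize ||x||^2 as a stand-in for a waveform misfit.
# (In FWI, `grad` would be supplied by the adjoint-state method.)
x = np.array([5.0, -3.0])
m = np.zeros_like(x)
v = np.zeros_like(x)
for t in range(1, 2001):
    grad = 2.0 * x                              # gradient of ||x||^2
    step, m, v = adam_update(m, v, grad, t, lr=0.1)
    x = x + step
```

Swapping in Momentum, Adagrad, or RMSProp only changes how `m` and `v` are accumulated and combined, which is what makes a side-by-side convergence comparison of these optimizers straightforward.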
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2019
- Bibcode: 2019AGUFM.S53D0490B
- Keywords:
  - 0555 Neural networks, fuzzy logic, machine learning (COMPUTATIONAL GEOPHYSICS)
  - 1942 Machine learning (INFORMATICS)
  - 3260 Inverse theory (MATHEMATICAL GEOPHYSICS)
  - 3275 Uncertainty quantification (MATHEMATICAL GEOPHYSICS)