Accelerated target-oriented least-squares reverse time migration using optimal mini-batches of shots
Abstract
Speeding up convergence and reducing the computational bottlenecks of seismic inverse problems is essential as we move toward large-scale seismic data acquisition and 3D inversion. We consider the problem of target-oriented least-squares reverse-time migration and introduce an accelerated approach based on optimally selected mini-batches of shots. We derive these subsets using an illumination metric: given an area of interest, we calculate the effectiveness of each shot using the Hessian matrix and the linearized Born modeling operator. We use the ultra-wide-band phase-space beam summation method to calculate the diagonal of the Hessian matrix, with beams as the local basis functions. This technique exploits the following localization stages, which reduce the otherwise enormous computational cost:
- We threshold beams with low amplitudes.
- For a given area of interest, we consider only those beams that pass through its neighborhood.
- The number of Green's functions that construct each beam is relatively small.

These localizations lead to an efficient target-oriented Hessian along with an a priori sparse representation of the beam propagators. Lastly, to obtain optimal mini-batches of the shots that are most critical for illuminating a particular target zone, we apply K-means clustering to the calculated shot-effectiveness values. The resulting clusters contain all the relevant shots that contribute to the image at the target zone. These clusters, also known as mini-batches, are sorted by shot effectiveness. This one-time shot-selection strategy provides the optimal number of sources in each iteration and reduces the computational cost of target-oriented imaging. The gradient calculation for each mini-batch requires only a single GPU and therefore enables a scalable parallel implementation of LSRTM on multiple GPUs. We adopt the Adam stochastic optimization method, which incorporates information from previous gradients to obtain adaptive learning rates and stable model updates. Combining Adam optimization with the mini-batches, we obtain fast convergence of the misfit function. Finally, we demonstrate the potential of the proposed method using numerical examples.
- Publication:
- AGU Fall Meeting Abstracts
- Pub Date:
- December 2020
- Bibcode:
- 2020AGUFMS053.0003V
- Keywords:
- 0555 Neural networks; fuzzy logic; machine learning; COMPUTATIONAL GEOPHYSICS;
- 1910 Data assimilation; integration and fusion; INFORMATICS;
- 1914 Data mining; INFORMATICS;
- 1942 Machine learning; INFORMATICS
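
The two computational steps named in the abstract — clustering shot-effectiveness scores into mini-batches and updating the model with Adam — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the effectiveness values are synthetic, the 1-D k-means is hand-rolled, and all function names are hypothetical.

```python
import numpy as np

def select_mini_batches(effectiveness, n_clusters=3, n_iter=50, seed=0):
    """Cluster 1-D shot-effectiveness scores with a simple k-means and
    return mini-batches (lists of shot indices), most effective first."""
    rng = np.random.default_rng(seed)
    x = np.asarray(effectiveness, dtype=float)
    # Initialize centroids from distinct data values.
    centroids = rng.choice(x, size=n_clusters, replace=False)
    for _ in range(n_iter):
        # Assign each shot to its nearest centroid.
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned shots.
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = x[labels == k].mean()
    order = np.argsort(-centroids)  # sort clusters by effectiveness
    return [np.flatnonzero(labels == k).tolist() for k in order]

def adam_step(m, v, grad, t, theta, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first and second moments yield an
    adaptive per-parameter step size."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return m, v, theta

# Toy usage: six shots with synthetic effectiveness scores.
eff = np.array([0.9, 0.1, 0.85, 0.15, 0.5, 0.55])
batches = select_mini_batches(eff, n_clusters=3)
```

In the actual method, the gradient fed to `adam_step` would come from one mini-batch of shots per iteration, so the per-iteration cost scales with the batch size rather than the full survey.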