Scaling the Pseudo-Spectral Mountain: Spherical Anelasticity at 10,000 Cores
Abstract
The last decade has witnessed a blossoming in the use of numerical simulations to examine global-scale dynamo processes operating in stellar convection zones. Increasing availability of computational resources has allowed many insights into these phenomena to be gained through the wide application of the Anelastic Spherical Harmonic (ASH) code in particular. ASH has been applied extensively to the study of solar-like stars, most notably to the various dynamo states attainable within such stars and to the processes that drive and maintain the solar differential rotation. Its application has also provided a window into the inner workings of convection zones with a decidedly less shell-like geometry, such as the fully convective, low-mass M-type stars, or the convective cores of high-mass A- and B-type stars. ASH solves the anelastic MHD equations within a pseudo-spectral framework, employing a spherical harmonic decomposition on spherical shells and either a Chebyshev polynomial or finite-difference formulation in the radial direction. The spectral transforms associated with the pseudo-spectral treatment, and the Poisson solve inherent in the anelastic formulation, imply that ASH suffers from the same communication bottlenecks that afflict many other pseudo-spectral methods. Historically, efficient use of this code has been limited to roughly 2,000 cores for problems with 1024³ grid points, but recently, a thorough restructuring of ASH has enabled strong scaling of 1024³-class problems out to 17,000 cores. These improvements in scalability arise primarily from a careful load balancing of the Poisson solve and its associated communication pathways, as well as from aggregation of the spectral transform communication. I will discuss in detail the current implementation of ASH, accomplished entirely with MPI, and then touch on why an OpenMP hybridization (recently successful in some pseudo-spectral applications) seems unlikely to yield additional scalability gains in this particular instance. I will conclude with some highlights of the new research opportunities now arising in the solar context from this improved scalability. On one hand, this scaling allows for the efficient computation of low- to mid-resolution problems that require tens of millions of time-integration steps, such as those seeking to resolve several stellar dynamo cycles. On the other hand, problems that are inherently high resolution in nature, such as MHD simulations of convective overshoot into a stable radiative zone, or treatment of the solar near-surface shear layer, are now becoming computationally tractable within a global framework.
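For context, the Poisson solve mentioned above follows directly from the anelastic mass constraint. A schematic statement in standard anelastic notation (not taken from the abstract): here $\bar{\rho}(r)$ is the spherically symmetric background density, $p'$ the pressure perturbation, and $\mathbf{F}$ collects the buoyancy, Coriolis, Lorentz, and advective terms of the momentum equation.

```latex
% Anelastic mass constraint: velocities are divergence-free when
% weighted by the background density stratification \bar{\rho}(r).
\nabla \cdot \left( \bar{\rho}\, \mathbf{v} \right) = 0
% Enforcing this constraint on the momentum equation
% \bar{\rho}\,\partial_t \mathbf{v} = -\nabla p' + \mathbf{F}
% yields an elliptic (Poisson-type) equation for the pressure:
\nabla^2 p' = \nabla \cdot \mathbf{F}
```

This elliptic equation couples every radial level at each timestep, which is why its load balancing and communication pathways dominate the scaling behavior described in the abstract.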
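The abstract attributes part of the scaling gains to aggregation of the spectral transform communication. As a rough illustration of that general idea (this is not ASH's actual code; all names, array layouts, and sizes here are assumptions), the C sketch below packs all spherical-harmonic modes bound for each rank into one contiguous buffer and performs the radial-to-horizontal transpose with a single collective `MPI_Alltoall`, rather than many small point-to-point messages whose latency dominates at high core counts.

```c
/* Minimal sketch of an aggregated pseudo-spectral transpose (illustrative,
 * not ASH's implementation). Before the transpose each rank owns all radii
 * for a subset of (l,m) modes; afterwards it owns all modes for a subset
 * of radii. */
#include <mpi.h>
#include <stdlib.h>

void aggregated_transpose(const double *local_modes, double *transposed,
                          int modes_per_rank, int radii_per_rank,
                          MPI_Comm comm)
{
    int nranks;
    MPI_Comm_size(comm, &nranks);

    /* Doubles exchanged with each peer rank. */
    int block = modes_per_rank * radii_per_rank;
    double *sendbuf = malloc((size_t)nranks * block * sizeof(double));

    /* Pack: gather the radii destined for rank r into contiguous block r,
     * so the entire exchange can be done in one collective call. */
    for (int r = 0; r < nranks; ++r)
        for (int m = 0; m < modes_per_rank; ++m)
            for (int k = 0; k < radii_per_rank; ++k)
                sendbuf[(size_t)r * block + m * radii_per_rank + k] =
                    local_modes[m * (radii_per_rank * nranks)
                                + r * radii_per_rank + k];

    /* One aggregated exchange replaces nranks - 1 small messages. */
    MPI_Alltoall(sendbuf, block, MPI_DOUBLE,
                 transposed, block, MPI_DOUBLE, comm);

    free(sendbuf);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int nranks;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Illustrative sizes: 4 modes and 8 radii per rank. */
    int mpr = 4, rpr = 8, nr_total = rpr * nranks;
    double *modes = malloc((size_t)mpr * nr_total * sizeof(double));
    double *trans = malloc((size_t)nranks * mpr * rpr * sizeof(double));
    for (int i = 0; i < mpr * nr_total; ++i)
        modes[i] = (double)i;

    aggregated_transpose(modes, trans, mpr, rpr, MPI_COMM_WORLD);

    free(modes);
    free(trans);
    MPI_Finalize();
    return 0;
}
```

A hybrid MPI/OpenMP version would shrink `nranks` and enlarge `block`, trading message count for per-message volume; the abstract argues that, for ASH's communication pattern, this trade is unlikely to pay off.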
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2012
- Bibcode: 2012AGUFMDI22A..05F
- Keywords:
- 5734 PLANETARY SCIENCES: FLUID PLANETS / Magnetic fields and magnetism;
- 7544 SOLAR PHYSICS, ASTROPHYSICS, AND ASTRONOMY / Stellar interiors and dynamo theory;
- 3323 ATMOSPHERIC PROCESSES / Large eddy simulation