Scalable communication for high-order stencil computations using CUDA-aware MPI
Abstract
Modern compute nodes in high-performance computing provide a tremendous level of parallelism and processing power. However, as arithmetic performance has been observed to increase at a faster rate than memory and network bandwidths, optimizing data movement has become critical for achieving strong scaling in many communication-heavy applications. This performance gap has been further accentuated with the introduction of graphics processing units, which can provide several times higher throughput in data-parallel tasks than central processing units. In this work, we explore the computational aspects of iterative stencil loops and implement a generic communication scheme using CUDA-aware MPI, which we use to accelerate magnetohydrodynamics simulations based on high-order finite differences and third-order Runge-Kutta integration. We put particular focus on improving the intra-node locality of workloads. Our GPU implementation scales strongly from one to $64$ devices at $50\%$--$87\%$ of the expected efficiency based on a theoretical performance model. Compared with a multi-core CPU solver, our implementation exhibits a $20$--$60\times$ speedup and $9$--$12\times$ improved energy efficiency in compute-bound benchmarks on $16$ nodes.
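For illustration only (the abstract itself contains no code), the sketch below shows the core idea behind CUDA-aware MPI communication for stencil computations: device pointers are handed directly to MPI_Isend/MPI_Irecv, so halo data moves between GPUs without explicit host staging. The 1-D periodic decomposition and all names (NX, RADIUS, exchange_halos) are assumptions made for this example, not details taken from the publication.

```c
/* Minimal sketch of a CUDA-aware MPI halo exchange for a 1-D-decomposed
 * stencil grid. Layout per rank: [left halo | NX interior | right halo].
 * Requires a CUDA-aware MPI build; compile with e.g.
 *   mpicc halo.c -I$CUDA_HOME/include -L$CUDA_HOME/lib64 -lcudart
 */
#include <mpi.h>
#include <cuda_runtime.h>

#define NX     1024   /* interior points per rank (illustrative)        */
#define RADIUS 3      /* halo width for a high-order stencil (assumed)  */

static void exchange_halos(double *d_field, int rank, int nranks)
{
    int left  = (rank - 1 + nranks) % nranks;   /* periodic neighbors */
    int right = (rank + 1) % nranks;
    MPI_Request req[4];

    /* Post receives into the ghost zones; buffers live in device memory. */
    MPI_Irecv(d_field,               RADIUS, MPI_DOUBLE, left,  0,
              MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(d_field + RADIUS + NX, RADIUS, MPI_DOUBLE, right, 1,
              MPI_COMM_WORLD, &req[1]);

    /* Send the boundary slabs of the interior straight from the device. */
    MPI_Isend(d_field + RADIUS,      RADIUS, MPI_DOUBLE, left,  1,
              MPI_COMM_WORLD, &req[2]);
    MPI_Isend(d_field + NX,          RADIUS, MPI_DOUBLE, right, 0,
              MPI_COMM_WORLD, &req[3]);

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double *d_field;
    size_t bytes = (NX + 2 * RADIUS) * sizeof(double);
    cudaMalloc((void **)&d_field, bytes);
    cudaMemset(d_field, 0, bytes);

    exchange_halos(d_field, rank, nranks);   /* one exchange per timestep */

    cudaFree(d_field);
    MPI_Finalize();
    return 0;
}
```

The paper's actual scheme is more general (3-D decomposition, multiple fields, computation-communication overlap); this sketch only demonstrates the defining property of CUDA-aware MPI, namely that no manual cudaMemcpy to host buffers is needed around the MPI calls.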
- Publication:
- Parallel Computing
- Pub Date:
- July 2022
- DOI:
- 10.1016/j.parco.2022.102904
- arXiv:
- arXiv:2103.01597
- Bibcode:
- 2022ParC..11102904P
- Keywords:
- Computer Science - Distributed, Parallel, and Cluster Computing;
- Physics - Computational Physics;
- Physics - Fluid Dynamics
- E-Print:
- 15 pages, 15 figures. Updated with the accepted manuscript; more extensive tests added and wording clarified in several places. Please refer to the published article for the most polished version.