Abstract
As processor clock rates become more dynamic and workloads become more adaptive, the vulnerability to global synchronization that already complicates programming for performance in today's petascale environment will be exacerbated. Algebraic multigrid (AMG), the solver of choice in many large-scale PDE-based simulations, scales well in the weak sense, with fixed problem size per node, on tightly coupled systems when loads are well balanced and core performance is reliable. However, its strong scaling to many cores within a node is challenging. Reducing synchronization and increasing concurrency are vital adaptations of AMG to hybrid architectures. Recent communication-reducing improvements to classical additive AMG by Vassilevski and Yang improve concurrency and increase communication-computation overlap, while retaining convergence properties close to those of standard multiplicative AMG, but they remain bulk synchronous. We extend the additive AMG of Vassilevski and Yang to asynchronous task-based parallelism using a hybrid programming model, with OmpSs (from the Barcelona Supercomputing Center) within a node and MPI for internode communication. We implement a tiling approach that decomposes the grid hierarchy into parallel units within task containers. We compare against the MPI-only BoomerAMG and Auxiliary-space Maxwell Solver (AMS) in the hypre library for the 3D Laplacian operator and electromagnetic diffusion, respectively. In time to solution for a full solve, the MPI+OmpSs hybrid improves over the all-MPI approach in strong scaling at full core count (32 threads on a single Haswell node of the Cray XC40) and maintains this per-node advantage as both are weak-scaled to thousands of cores, with MPI between nodes.
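The tiling idea in the abstract lends itself to a brief illustration. The sketch below is not the paper's implementation (which operates on hypre's BoomerAMG hierarchy with OmpSs); it is a minimal, self-contained analogue in C that tiles one damped-Jacobi sweep on a 1D Laplacian into tasks and chains a per-tile diagnostic task behind each smoothing task through data dependences rather than a level-wide barrier. Standard OpenMP task dependences stand in for OmpSs's in/out clauses here, and all sizes and names (N, TILE, tile_nrm2, the smoother loop) are hypothetical.

```c
/*
 * Minimal sketch only: tile-wise damped-Jacobi smoothing of a 1D Laplacian,
 * with each tile issued as a task and a per-tile diagnostic task chained
 * behind it by data dependences instead of a barrier.  The paper uses OmpSs
 * inside hypre's AMG hierarchy; here standard OpenMP task dependences stand
 * in for it, and all sizes and names are hypothetical.
 * Compile with: cc -fopenmp tiles.c
 */
#include <stdio.h>
#include <stdlib.h>

#define N      4096              /* points on this grid level (assumed) */
#define TILE   256               /* points per tile / task container (assumed) */
#define NTILES (N / TILE)
#define OMEGA  (2.0 / 3.0)       /* damping factor for weighted Jacobi */

int main(void)
{
    double *u         = calloc(N, sizeof *u);
    double *unew      = calloc(N, sizeof *unew);
    double *f         = malloc(N * sizeof *f);
    double *tile_nrm2 = calloc(NTILES, sizeof *tile_nrm2);
    for (int i = 0; i < N; ++i) f[i] = 1.0;      /* simple right-hand side */

    #pragma omp parallel
    #pragma omp single
    {
        for (int t = 0; t < NTILES; ++t) {
            int lo = t * TILE, hi = lo + TILE;

            /* Smoothing task: writes only this tile's block of unew. */
            #pragma omp task depend(out: unew[lo:TILE]) firstprivate(lo, hi)
            for (int i = lo; i < hi; ++i) {
                double ul = (i > 0)     ? u[i - 1] : 0.0;   /* Dirichlet BC */
                double ur = (i < N - 1) ? u[i + 1] : 0.0;
                unew[i] = (1.0 - OMEGA) * u[i] + OMEGA * 0.5 * (ul + ur + f[i]);
            }

            /* Diagnostic task: squared norm of this tile's update.  The
             * depend(in:) on the same block chains it behind this tile's
             * smoothing task only; other tiles are not held up, so no
             * level-wide barrier separates the two phases. */
            #pragma omp task depend(in: unew[lo:TILE]) \
                             depend(out: tile_nrm2[t]) firstprivate(lo, hi, t)
            {
                double s = 0.0;
                for (int i = lo; i < hi; ++i) {
                    double d = unew[i] - u[i];
                    s += d * d;
                }
                tile_nrm2[t] = s;
            }
        }
        #pragma omp taskwait     /* single join point once all tiles finish */
    }

    double nrm2 = 0.0;
    for (int t = 0; t < NTILES; ++t) nrm2 += tile_nrm2[t];
    printf("||unew - u||^2 = %g after one sweep\n", nrm2);

    free(u); free(unew); free(f); free(tile_nrm2);
    return 0;
}
```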
Original language | English (US)
---|---
Title of host publication | Proceedings of the Platform for Advanced Scientific Computing Conference (PASC '17)
Publisher | Association for Computing Machinery (ACM)
ISBN (Print) | 9781450350624
State | Published - Jun 23 2017
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: We thank Hatem Ltaief, Stefano Zampini, and Lisandro Dalcin of the Extreme Computing Research Center at KAUST for their help. We also thank Ulrike Yang from Lawrence Livermore National Laboratory for her useful comments. For performance tests on the Shaheen II Cray XC40 supercomputer we gratefully acknowledge the KAUST Supercomputing Laboratory.