Abstract
PetscSF, the communication component of the Portable, Extensible Toolkit for Scientific Computation (PETSc), is designed to provide PETSc's communication infrastructure suitable for exascale computers that utilize GPUs and other accelerators. PetscSF provides a simple application programming interface (API) for managing common communication patterns in scientific computations by using a star-forest graph representation. PetscSF supports several implementations based on MPI and NVSHMEM, whose selection can be based on the characteristics of the application or the target architecture. An efficient and portable model for network and intra-node communication is essential for implementing large-scale applications. The Message Passing Interface, which has been the de facto standard for distributed memory systems, has developed into a large, complex API that does not yet provide high performance on the emerging heterogeneous CPU-GPU-based exascale systems. In this paper, we discuss the design of PetscSF, how it can overcome some difficulties of working directly with MPI on GPUs, and we demonstrate its performance, scalability, and novel features.
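The star-forest abstraction described in the abstract is compact enough to show in a few lines. Below is a minimal sketch (not taken from the paper) of the public PetscSF API, written against a recent PETSc (3.18 or later, where `PetscCall` and the `MPI_Op` argument to `PetscSFBcastBegin/End` are available; older releases differ slightly): each rank owns one root value and attaches one leaf to the root on the next rank, so a single broadcast rotates data around a ring.

```c
/* Minimal PetscSF sketch: a ring of ranks, one root and one leaf per rank.
 * Assumes PETSc >= 3.18; signatures vary slightly in older releases. */
#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscSF     sf;
  PetscMPIInt rank, size;
  PetscSFNode remote;
  PetscInt    rootdata, leafdata = -1;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));

  /* This rank's single leaf references root index 0 on the next rank */
  remote.rank  = (rank + 1) % size;
  remote.index = 0;

  PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
  /* nroots = 1, nleaves = 1; ilocal = NULL means leaves are contiguous */
  PetscCall(PetscSFSetGraph(sf, 1, 1, NULL, PETSC_COPY_VALUES,
                            &remote, PETSC_COPY_VALUES));
  PetscCall(PetscSFSetFromOptions(sf)); /* allows -sf_type at runtime */
  PetscCall(PetscSFSetUp(sf));

  rootdata = 100 + rank; /* the value this rank owns */

  /* Broadcast root values to the leaves that reference them */
  PetscCall(PetscSFBcastBegin(sf, MPIU_INT, &rootdata, &leafdata, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, MPIU_INT, &rootdata, &leafdata, MPI_REPLACE));

  PetscCall(PetscPrintf(PETSC_COMM_SELF, "[%d] leaf received %" PetscInt_FMT "\n",
                        rank, leafdata));

  PetscCall(PetscSFDestroy(&sf));
  PetscCall(PetscFinalize());
  return 0;
}
```

Because the graph description is separate from the implementation, the underlying communication strategy can be switched at runtime without code changes, for example with options such as `-sf_type basic`, `-sf_type neighbor`, or `-sf_type window`.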
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-1 |
| Number of pages | 1 |
| Journal | IEEE Transactions on Parallel and Distributed Systems |
| DOIs | |
| State | Published - 2021 |
Bibliographical note
Acknowledgements: We thank Akhil Langer and Jim Dinan from the NVIDIA NVSHMEM team for their assistance. This work was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration, and by the U.S. Department of Energy under Contract DE-AC02-06CH11357 and Office of Science Awards DE-SC0016140 and DE-AC02-0000011838. This research used resources of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.