Abstract
We propose a new framework for deploying Reverse Time Migration (RTM) simulations on distributed-memory systems equipped with multiple GPUs. Our software infrastructure engine, TB-RTM, relies on the StarPU dynamic runtime system to orchestrate the asynchronous scheduling of RTM computational tasks on the underlying resources. Besides dealing with the challenging hardware heterogeneity, TB-RTM supports tasks with different workload characteristics, which stress disparate components of the hardware system. RTM is challenging in that it operates intensively at both ends of the memory hierarchy, with compute kernels running at the highest level of the memory system, possibly in GPU main memory, while I/O kernels save solution data to fast storage. We consider how to span the wide performance gap between these two extreme ends of the memory system, i.e., GPU memory and fast storage, on which large-scale RTM simulations routinely execute. To maximize hardware occupancy while maintaining high memory bandwidth throughout the memory subsystem, our framework leverages the new out-of-core (OOC) feature from StarPU to prefetch solution data in and out, not only from/to GPU/CPU main memory but also from/to the fast storage system. The OOC technique may create opportunities for overlapping expensive data movement with computation. The TB-RTM framework addresses this challenging heterogeneity problem with a systematic approach that is oblivious to the targeted hardware architectures. The resulting RTM framework can effectively be deployed on massively parallel GPU-based systems, delivering performance scalability up to 500 GPUs.
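The abstract refers to StarPU's asynchronous task scheduling and its out-of-core (OOC) support for spilling data to fast storage. The following minimal C sketch is not code from the paper; the kernel body, problem size, and the `/scratch/ooc` path are illustrative placeholders. It only shows the general StarPU pattern the abstract alludes to: registering a disk memory node for OOC data management and submitting a task asynchronously so the runtime can move data and overlap transfers with computation.

```c
/* Minimal sketch (assumptions: placeholder kernel, size, and disk path),
 * illustrating StarPU task submission with an out-of-core disk node. */
#include <starpu.h>
#include <starpu_disk.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for an RTM stencil/update kernel operating on a wavefield chunk. */
static void wavefield_kernel(void *buffers[], void *cl_arg)
{
    (void)cl_arg;
    float *u = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    for (unsigned i = 0; i < n; i++)
        u[i] *= 2.0f;
}

static struct starpu_codelet cl = {
    .cpu_funcs = { wavefield_kernel },
    .nbuffers  = 1,
    .modes     = { STARPU_RW },
};

int main(void)
{
    if (starpu_init(NULL) != 0) return 1;

    /* Register a disk memory node backed by fast storage; StarPU can then
     * evict and prefetch data handles to/from it (out-of-core). The path
     * and 1 GB capacity are placeholders. */
    starpu_disk_register(&starpu_disk_unistd_ops,
                         (void *)"/scratch/ooc",
                         (starpu_ssize_t)(1024 * 1024 * 1024));

    unsigned n = 1u << 20;
    float *u = malloc(n * sizeof(*u));
    for (unsigned i = 0; i < n; i++) u[i] = 1.0f;

    /* Hand the buffer to the runtime as a data handle. */
    starpu_data_handle_t h;
    starpu_vector_data_register(&h, STARPU_MAIN_RAM, (uintptr_t)u, n, sizeof(*u));

    /* Asynchronous task submission; StarPU schedules it on an available
     * worker and performs the required data transfers. */
    starpu_task_insert(&cl, STARPU_RW, h, 0);

    starpu_task_wait_for_all();
    starpu_data_unregister(h);
    free(u);
    starpu_shutdown();
    return 0;
}
```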
| Original language | English (US) |
|---|---|
| Title of host publication | 2019 IEEE International Conference on Cluster Computing (CLUSTER) |
| Publisher | IEEE |
| ISBN (Print) | 9781728147345 |
| DOIs | |
| State | Published - Nov 13 2019 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01. Acknowledgements: This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. We are grateful to ORNL’s HPC Engineer George Markomanolis and Prof. Rio Yokota of Tokyo Institute of Technology, Japan, for their assistance with the runs on Summit and Tsubame 3.0, respectively. We are also grateful to Dr. Rached Abdelkhalak from the Extreme Computing Research Center, KAUST, for the fruitful discussions.