The use of Graphics Processing Units (GPUs) for scientific computing has become mainstream in the last decade. Applications ranging from deep learning to seismic modelling have benefitted from the increase in computational efficiency compared to their equivalent CPU-based implementations. Since many inverse problems in geophysics rely on similar core computations – e.g. dense linear algebra operations, convolutions, FFTs – it is reasonable to expect similar performance gains if GPUs are also leveraged in this context. In this paper we discuss how we have been able to take PyLops, a Python library for matrix-free linear algebra and optimization originally developed for single-node CPUs, and create a fully compatible GPU backend with the help of CuPy and cuSignal. A benchmark suite of our core operators shows that an average 65x speed-up in computations can be achieved when running computations on a V100 GPU. Moreover, by careful modification of the inner workings of the library, end users can obtain such a performance gain at virtually no cost: minimal code changes are required when switching between the CPU and GPU backends, mostly consisting of moving the data vector to the GPU device prior to solving an inverse problem with one of PyLops' solvers.
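The backend-agnostic design described above can be sketched as follows. This is an illustrative example of the dispatch pattern, not PyLops' actual implementation: a matrix-free operator infers the array module (NumPy or CuPy) from the data it receives, so the same operator code runs on CPU or GPU depending only on where the input vector lives. The `Diagonal` class and `get_module` helper here are hypothetical simplifications.

```python
import numpy as np

try:  # CuPy is optional; fall back to NumPy on machines without a GPU
    import cupy as cp

    def get_module(x):
        # Returns cupy if x is a GPU array, numpy otherwise
        return cp.get_array_module(x)
except ImportError:
    cp = None

    def get_module(x):
        return np


class Diagonal:
    """Matrix-free diagonal operator that adapts to the backend of its input
    (a simplified sketch of the pattern described in the abstract)."""

    def __init__(self, diag):
        self.diag = diag

    def matvec(self, x):
        xp = get_module(x)  # numpy or cupy, inferred from the data vector
        return xp.asarray(self.diag) * x

    def rmatvec(self, y):
        xp = get_module(y)
        return xp.conj(xp.asarray(self.diag)) * y


# CPU usage: the data vector is a NumPy array, so everything stays on the CPU.
# If x were created with cp.ones(4) instead, the identical operator code
# would execute on the GPU - mirroring the "move the data vector to the
# GPU device" workflow the abstract describes.
d = np.arange(1.0, 5.0)
x = np.ones(4)
y = Diagonal(d).matvec(x)
print(y.tolist())  # → [1.0, 2.0, 3.0, 4.0]
```

The key design choice is that the operator itself stores no backend state: the backend is decided per call from the input array's type, which is why switching to the GPU reduces to moving the data vector before invoking a solver.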
| Original language | English (US) |
| Title of host publication | Fifth EAGE Workshop on High Performance Computing for Upstream |
| Publisher | European Association of Geoscientists & Engineers |
| State | Published - 2021 |
Bibliographical note: KAUST Repository Item: Exported on 2022-12-12
Acknowledgements: The author thanks King Abdullah University of Science and Technology (KAUST) for funding his work. For computer time, this research used the resources of the Supercomputing Laboratory at King Abdullah University of Science & Technology (KAUST) in Thuwal, Saudi Arabia.