We present a comprehensive performance study of highly efficient, extreme-scale direct numerical simulations of secondary flows, using an optimized version of Nek5000. Our investigations are conducted on various Cray XC40 systems, using a very high-order spectral element method. Single-node efficiency is achieved by auto-generated assembly implementations of small matrix multiplies and key vector-vector operations, streaming lossless I/O compression, aggressive loop merging, and selective single-precision evaluations. Comparative studies at scale across three Cray XC40 systems, Trinity (LANL), Cori (NERSC), and Shaheen II (KAUST), show that the Cray programming environment, network configuration, parallel file system, and burst buffer all have a major impact on performance. All three systems have similar hardware, with similar CPU nodes and parallel file systems, but they differ in theoretical peak network bandwidth, operating system, and programming environment version. Our study reveals how these seemingly slight configuration differences can be critical to application performance. We also find that on 9216 nodes (294,912 cores) of the Trinity XC40, the application sustains petascale performance as well as 50% of peak memory bandwidth over the entire solver (500 TB/s in aggregate). On 3072 Xeon Phi nodes of Cori, we reach 378 TFLOP/s with an aggregate bandwidth of 310 TB/s, corresponding to a time-to-solution 2.11× faster than that obtained with the same number of (dual-socket) Xeon nodes.
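The "small matrix multiplies" that dominate a spectral element solver arise from tensor-product operator evaluation: applying an N×N derivative (or interpolation) matrix along one dimension of each N×N×N element. The following is a minimal illustrative sketch of that kernel shape, not the Nek5000 implementation; the function name, memory layout, and loop structure are assumptions for illustration. Kernels of exactly this form are what auto-generated assembly implementations replace.

```c
#include <stddef.h>

/* Sketch (hypothetical, not Nek5000 code): apply the N x N row-major
 * matrix D along the fastest-varying index of an N*N*N element u,
 * writing the result to ur. With layout u[i + N*(j + N*k)], this is
 * ur(i,j,k) = sum_l D(i,l) * u(l,j,k): one small N x N matrix applied
 * to N*N contiguous "columns" of length N. */
void tensor_apply_dim0(size_t N, const double *D, const double *u, double *ur)
{
    for (size_t jk = 0; jk < N * N; ++jk) {   /* loop over the N^2 columns */
        const double *col = u + jk * N;       /* column (j,k) of the element */
        double *out = ur + jk * N;
        for (size_t i = 0; i < N; ++i) {
            double acc = 0.0;
            for (size_t l = 0; l < N; ++l)
                acc += D[i * N + l] * col[l]; /* small dense matrix-vector */
            out[i] = acc;
        }
    }
}
```

Because N is small (the polynomial order plus one) and known at build time, such loops are good targets for generated, fully unrolled, vectorized code rather than a general-purpose BLAS call.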
Original language: English (US)
Journal: Concurrency and Computation: Practice and Experience
State: Published - Mar 17 2020
Bibliographical note: KAUST Repository Item, exported on 2020-10-01
Acknowledgements: The research reported in this paper was funded by King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia. We are thankful for the computing resources of the Supercomputing Laboratory and the Extreme Computing Research Center at KAUST; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231; and the Trinity project managed and operated by Los Alamos National Laboratory and Sandia National Laboratories.