Abstract
We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This work extends a previous single-GPU study by the same authors, presented at the EuroPar'16 conference, which demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed, and we show almost linear performance scaling as the number of GPUs increases. Finally, the recursive algorithmic formulation of these triangular BLAS kernels is oblivious to the targeted hardware architecture. We therefore port these recursive kernels to homogeneous x86 hardware architectures by relying on vendor-optimized BLAS implementations. Results reported on various hardware architectures highlight significant performance improvements over state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.
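As a rough illustration of the recursive formulation described above (not the actual KBLAS implementation), the sketch below shows how a left-sided, lower-triangular, double-precision TRSM can be recast recursively into large GEMM updates plus a small base-case TRSM using the standard cuBLAS API. The function name `rec_dtrsm_llnn` and the switch-over size `REC_THRESHOLD` are illustrative assumptions; the real library tunes the base case and substitutes its own custom kernels there.

```c
/* Hedged sketch of a recursive TRSM: solve L * X = alpha * B in place,
 * where L is m-by-m lower triangular and B is m-by-n, both column-major
 * in device memory. Names and the threshold are illustrative only. */
#include <cublas_v2.h>

#define REC_THRESHOLD 256  /* assumed size at which to stop recursing */

static void rec_dtrsm_llnn(cublasHandle_t h, int m, int n, const double *alpha,
                           const double *L, int ldl, double *B, int ldb)
{
    if (m <= REC_THRESHOLD) {
        /* Base case: call the vendor (or a custom in-place) TRSM kernel. */
        cublasDtrsm(h, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_LOWER,
                    CUBLAS_OP_N, CUBLAS_DIAG_NON_UNIT,
                    m, n, alpha, L, ldl, B, ldb);
        return;
    }
    int m1 = m / 2, m2 = m - m1;
    const double one = 1.0, neg_one = -1.0;

    /* Split L = [[L11, 0], [L21, L22]] and B = [B1; B2]. */

    /* X1 = L11^{-1} * (alpha * B1): recurse on the top-left block. */
    rec_dtrsm_llnn(h, m1, n, alpha, L, ldl, B, ldb);

    /* B2 := alpha * B2 - L21 * X1: the bulk of the work becomes a GEMM. */
    cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, m2, n, m1,
                &neg_one, L + m1, ldl, B, ldb, alpha, B + m1, ldb);

    /* X2 = L22^{-1} * B2: recurse on the bottom-right block. */
    rec_dtrsm_llnn(h, m2, n, &one,
                   L + m1 + (size_t)m1 * ldl, ldl, B + m1, ldb);
}
```

The point of the recursion is that most of the floating-point work is funneled into GEMM, which is typically the best-optimized kernel on every target, while only the small diagonal blocks are handled by a TRSM kernel; the same decomposition applies to TRMM and ports to x86 by swapping the cuBLAS calls for a vendor CPU BLAS.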
| Original language | English (US) |
|---|---|
| Article number | e4187 |
| Journal | Concurrency and Computation: Practice and Experience |
| Volume | 29 |
| Issue number | 15 |
| DOIs | |
| State | Published - Aug 10 2017 |
Bibliographical note
Publisher Copyright: Copyright © 2017 The Authors. Concurrency and Computation: Practice and Experience published by John Wiley & Sons, Ltd.
Keywords
- KBLAS
- dense triangular matrix computations
- manycore optimizations
- recursive formulation
ASJC Scopus subject areas
- Theoretical Computer Science
- Software
- Computer Science Applications
- Computer Networks and Communications
- Computational Theory and Mathematics