Abstract
This paper presents the power profiles of two high-performance dense linear algebra libraries for distributed-memory systems, ScaLAPACK and DPLASMA. From an algorithmic perspective, the two libraries follow opposite methodologies. The former is based on block algorithms and relies on multithreaded BLAS and a two-dimensional block-cyclic data distribution to achieve high parallel performance. The latter is based on tile algorithms running on top of a tile data layout and uses fine-grained task parallelism combined with a dynamic distributed scheduler (DAGuE) to leverage distributed-memory systems. We present performance results (Gflop/s) as well as power profiles (Watts) for two common dense factorizations used to solve linear systems of equations, namely Cholesky and QR. The reported numbers show that DPLASMA surpasses ScaLAPACK not only in performance (up to 2X speedup) but also in energy efficiency (up to 62%). © 2012 Springer-Verlag (outside the USA).
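The tile-algorithm approach described for DPLASMA can be illustrated with a minimal sequential sketch of a tiled Cholesky factorization: the matrix is partitioned into square tiles, and each tile operation (POTRF, TRSM, SYRK, GEMM) becomes a fine-grained task that a dynamic runtime such as DAGuE could schedule according to data dependences. The sketch below is an assumption-laden illustration using NumPy/SciPy kernels and an arbitrary tile size; it is not DPLASMA's API and it runs serially.

```python
# Minimal sequential sketch of a tiled Cholesky factorization (lower-triangular),
# showing the fine-grained tile tasks (POTRF, TRSM, SYRK, GEMM) that a dynamic
# scheduler would dispatch by data dependences. Tile size and helper names are
# illustrative assumptions, not DPLASMA's actual interface.
import numpy as np
from scipy.linalg import solve_triangular

def tiled_cholesky(A, nb):
    """Overwrite the lower triangle of the SPD matrix A (size divisible by nb)
    with its Cholesky factor L, operating tile by tile."""
    n = A.shape[0]
    nt = n // nb
    # View the matrix as an nt x nt grid of nb x nb tiles (views, not copies).
    T = [[A[i*nb:(i+1)*nb, j*nb:(j+1)*nb] for j in range(nt)] for i in range(nt)]
    for k in range(nt):
        # POTRF: factor the diagonal tile.
        T[k][k][:] = np.linalg.cholesky(T[k][k])
        for m in range(k + 1, nt):
            # TRSM: update the panel tiles below the diagonal.
            T[m][k][:] = solve_triangular(T[k][k], T[m][k].T, lower=True).T
        for m in range(k + 1, nt):
            # SYRK: symmetric rank-k update of the trailing diagonal tile.
            T[m][m][:] -= T[m][k] @ T[m][k].T
            for i in range(m + 1, nt):
                # GEMM: update the trailing off-diagonal tiles.
                T[i][m][:] -= T[i][k] @ T[m][k].T
    return np.tril(A)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((8, 8))
    A = B @ B.T + 8 * np.eye(8)        # build a symmetric positive definite matrix
    L = tiled_cholesky(A.copy(), nb=2)
    print(np.allclose(L @ L.T, A))     # True: L is the Cholesky factor of A
```

In a distributed tile-based runtime, each of these loop-body calls would become an independent task operating on one or a few tiles, which is what enables the fine-grained parallelism contrasted with ScaLAPACK's coarser block-algorithm approach.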
| Original language | English (US) |
|---|---|
| Pages (from-to) | 139-147 |
| Number of pages | 9 |
| Journal | Computer Science - Research and Development |
| Volume | 29 |
| Issue number | 2 |
| DOIs | |
| State | Published - Aug 30 2012 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01

ASJC Scopus subject areas
- General Computer Science