High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

Hatem Ltaief, Piotr R. Luszczek, Jack Dongarra

Research output: Contribution to journal › Article › peer-review

18 Scopus citations


This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows most of the computation during the first stage (reduction to band form) to be cast into calls to Level 3 BLAS, and that reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout onto the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time; the tile size also determines the matrix bandwidth after the first-stage reduction. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix against state-of-the-art open source and commercial numerical software packages, namely LAPACK (compiled with optimized, multithreaded BLAS from MKL) and Intel MKL version 10.2. © 2013 ACM.
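For context, the classical one-stage BRD that the article's two-stage tiled approach improves upon applies alternating left and right Householder reflectors directly to the full matrix (the Golub–Kahan scheme underlying LAPACK's reduction routines). A minimal NumPy sketch of that classical one-stage reduction, included only to illustrate what "bidiagonal reduction" computes (this is not the paper's tile algorithm, and `house`/`bidiagonalize` are illustrative names, not routines from the paper):

```python
import numpy as np

def house(x):
    """Return Householder vector v and scalar beta such that
    (I - beta * v v^T) x has zeros below its first entry."""
    v = x.astype(float).copy()
    sigma = np.linalg.norm(x)
    if sigma == 0.0:
        return v, 0.0
    v[0] += np.copysign(sigma, x[0])   # avoid cancellation
    beta = 2.0 / np.dot(v, v)
    return v, beta

def bidiagonalize(A):
    """Classical one-stage Golub-Kahan reduction of A (m >= n) to
    upper bidiagonal form B, with the same singular values as A."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        # Left reflector: zero out B[k+1:, k] (below the diagonal).
        v, beta = house(B[k:, k])
        B[k:, k:] -= beta * np.outer(v, v @ B[k:, k:])
        if k < n - 2:
            # Right reflector: zero out B[k, k+2:] (right of the superdiagonal).
            v, beta = house(B[k, k+1:])
            B[k:, k+1:] -= beta * np.outer(B[k:, k+1:] @ v, v)
    return B

A = np.random.default_rng(0).standard_normal((6, 4))
B = bidiagonalize(A)
# B is upper bidiagonal and shares A's singular values.
```

Because the reflectors are applied to the whole trailing submatrix at every step, this one-stage scheme is dominated by memory-bound Level 2 BLAS operations; the article's two-stage approach avoids this by first reducing to band form with Level 3 BLAS and then chasing the band down to bidiagonal with cache-optimized kernels.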
Original language: English (US)
Pages (from-to): 1-22
Number of pages: 22
Journal: ACM Transactions on Mathematical Software
Issue number: 3
State: Published - Apr 1 2013

Bibliographical note

KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: The research reported here was supported in part by the National Science Foundation, the Department of Energy, and Microsoft Research.

ASJC Scopus subject areas

  • Software
  • Applied Mathematics
