Optimizing memory-bound SYMV kernel on GPU hardware accelerators

Ahmad Abdelfattah, Jack Dongarra, David E. Keyes, Hatem Ltaief

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount for improving productivity while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product (SYMV) on NVIDIA Fermi GPUs. Because of its inherently memory-bound nature, this kernel is critical to the tridiagonalization of a symmetric dense matrix, a preprocessing step in computing the eigenpairs. Using a novel design that addresses the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the equivalent CUBLAS 4.0 kernel, and 7-8% and 30% improvements over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library, in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.
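For readers who want a concrete picture of the operation the abstract refers to, below is a minimal CUDA sketch of SYMV with only the lower triangle of A stored. SYMV performs O(n²) flops on O(n²) data, which is why the kernel is memory-bound. The kernel name, row-major layout, and test matrix here are illustrative assumptions of ours (CUBLAS and MAGMA use column-major storage), and the paper's actual blocked, latency-hiding design is not reproduced; this naive version merely exposes the transposed, irregular access pattern that such a design must optimize.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Minimal reference SYMV: y = alpha*A*x + beta*y, where A is an n x n
// symmetric matrix with only its lower triangle stored (row-major here,
// purely for readability). One thread computes one row of the product;
// entries above the diagonal are mirrored from the stored lower
// triangle, which is exactly the irregular (transposed) access pattern
// the paper's blocked design addresses. This sketch omits that design.
__global__ void symv_lower_naive(int n, float alpha, const float* A,
                                 const float* x, float beta, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float sum = 0.0f;
    for (int j = 0; j < n; ++j) {
        // A(i,j) is stored for j <= i; for j > i it equals A(j,i).
        float a = (j <= i) ? A[i * n + j] : A[j * n + i];
        sum += a * x[j];
    }
    y[i] = alpha * sum + beta * y[i];
}

int main()
{
    const int n = 1024;
    std::vector<float> hA(n * n, 0.0f), hx(n, 1.0f), hy(n, 0.0f);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j <= i; ++j)   // fill the lower triangle only
            hA[i * n + j] = 1.0f / (1.0f + i + j);

    float *dA, *dx, *dy;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    symv_lower_naive<<<blocks, threads>>>(n, 1.0f, dA, dx, 0.0f, dy);

    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dA); cudaFree(dx); cudaFree(dy);
    return 0;
}
```

Note that each thread in this naive version walks both a full row and a full column of A with no reuse, so the matrix is effectively streamed from global memory with poorly coalesced accesses on the mirrored half; tiling A and reusing each tile for both its row and its transposed column contribution is the standard way such blocked SYMV designs recover bandwidth.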
Original language: English (US)
Title of host publication: High Performance Computing for Computational Science - VECPAR 2012
Publisher: Springer Nature
Pages: 72-79
Number of pages: 8
ISBN (Print): 9783642387173
DOIs
State: Published - 2013

Bibliographical note

KAUST Repository Item: Exported on 2020-10-01

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
