Abstract
A combination of hierarchical tree-like data structures and data access patterns from fast multipole methods and hierarchical low-rank approximation of linear operators from H-matrix methods appears to form an algorithmic path forward for efficient implementation of many linear algebraic operations of scientific computing at the exascale. The combination provides asymptotically optimal computational and communication complexity and applicability to large classes of operators that commonly arise in scientific computing applications. A convergence of the mathematical theories of the fast multipole and H-matrix methods has been underway for over a decade. We recap this mathematical unification and describe implementation aspects of a hybrid of these two compelling hierarchical algorithms on hierarchical distributed-shared memory architectures, which are likely to be the first to reach the exascale. We present a new communication complexity estimate for fast multipole methods on such architectures. We also show how the data structures and access patterns of H-matrices for low-rank operators map onto those of the fast multipole method, leading to an algebraically generalized form of the fast multipole method that compromises none of its architecturally ideal properties.
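As a rough illustration of the hierarchical low-rank idea the abstract refers to, the sketch below (an example constructed here, not code from the paper) compresses a single off-diagonal kernel block coupling two well-separated point clusters with a truncated SVD. In an H-matrix or algebraically generalized FMM, this kind of compression is applied recursively to the admissible blocks of a hierarchically partitioned operator; the cluster geometry, kernel, and rank `k` used here are purely hypothetical choices for demonstration.

```python
# Minimal sketch of hierarchical low-rank compression of one admissible block.
# Assumptions (not from the paper): 1-D point clusters, the kernel 1/|x - y|,
# and an arbitrarily chosen truncation rank k.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 1-D point clusters (hypothetical example data).
x = rng.uniform(0.0, 1.0, 200)      # source cluster
y = rng.uniform(10.0, 11.0, 200)    # target cluster, far from the sources

# Dense interaction block for the kernel 1/|x - y|.
A = 1.0 / np.abs(y[:, None] - x[None, :])

# Truncated SVD: one simple way to form a rank-k factorization A ~ U_k @ V_k.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 5                                # illustrative rank only
U_k = U[:, :k] * s[:k]
V_k = Vt[:k, :]

# For well-separated clusters the singular values decay rapidly,
# so the relative error is expected to be small even at low rank.
rel_err = np.linalg.norm(A - U_k @ V_k) / np.linalg.norm(A)
print(f"rank-{k} relative error: {rel_err:.2e}")
```

The rapid singular-value decay of such well-separated (admissible) blocks is the common property that both FMM expansions and H-matrix block factorizations exploit, which is why the two data structures and access patterns can be mapped onto one another as the abstract describes.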
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 62-83 |
| Number of pages | 22 |
| Journal | Supercomputing Frontiers and Innovations |
| Volume | 1 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2014 |
Keywords
- Communication complexity
- Fast multipole methods
- H-matrices
- Hierarchical low-rank approximation
- Sparse solvers
ASJC Scopus subject areas
- Software
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications
- Computational Theory and Mathematics