Abstract
Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes.
Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundred cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability.
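The abstract refers to deal.II's distributed-mesh infrastructure, in which each MPI rank stores only its own part of the triangulation and the associated degrees of freedom. The sketch below is not taken from the paper; it is a minimal illustration, using deal.II's publicly documented `parallel::distributed::Triangulation` and `DoFHandler` classes, of how such a distributed mesh might be set up. The mesh (a globally refined unit square), the Q1 element, and the printed statistics are arbitrary choices for the example.

```cpp
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>

#include <iostream>

using namespace dealii;

int main(int argc, char *argv[])
{
  // Initialize MPI for this run (one thread per MPI process).
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  // A triangulation whose cells are partitioned across all MPI ranks;
  // each rank stores its locally owned cells plus a layer of ghost cells.
  parallel::distributed::Triangulation<2> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(6);

  // Distribute the degrees of freedom of a Q1 element over the
  // distributed mesh; each rank enumerates only the DoFs it owns.
  FE_Q<2>       fe(1);
  DoFHandler<2> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // Each rank reports the sizes of its local portion of the problem.
  std::cout << "Rank "
            << Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) << ": "
            << triangulation.n_locally_owned_active_cells() << " cells, "
            << dof_handler.n_locally_owned_dofs() << " DoFs" << std::endl;

  return 0;
}
```

Run under `mpirun`, each rank prints only the cells and degrees of freedom it owns, reflecting the fully distributed storage scheme the paper describes.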
Original language | English (US) |
---|---|
Title of host publication | 17th European MPI Users' Group Meeting |
Publisher | Springer-Verlag Berlin |
Pages | 122-+ |
State | Published - 2010 |
Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2022-06-27
Acknowledged KAUST grant number(s): KUS-C1-016-04
Acknowledgements: Timo Heister is partly supported by the German Research Foundation (DFG) through GK 1023. Martin Kronbichler is supported by the Graduate School in Mathematics and Computation (FMB). Wolfgang Bangerth was partially supported by Award No. KUS-C1-016-04 made by King Abdullah University of Science and Technology (KAUST), by a grant from the NSF-funded Computational Infrastructure in Geodynamics initiative through Award No. EAR-0426271, and by an Alfred P. Sloan Research Fellowship. The computations were done on the Hurr cluster of the Institute for Applied Mathematics and Computational Science (IAMCS) at Texas A&M University. Hurr is supported by Award No. KUS-C1-016-04 made by King Abdullah University of Science and Technology (KAUST).
This publication acknowledges KAUST support, but has no KAUST affiliated authors.