Abstract
We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership-class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash. We apply an edge list partitioning technique designed to accommodate high-degree vertices (hubs), which create scaling challenges when processing scale-free graphs. In addition to partitioning hubs, we use ghost vertices to represent them, which reduces communication hotspots. We present a scaling study with three important graph algorithms: Breadth-First Search (BFS), K-Core decomposition, and Triangle Counting. We also demonstrate scalability on BG/P Intrepid by comparing to the best known Graph500 results. We show results on two clusters with local NVRAM storage that are capable of traversing trillion-edge scale-free graphs. By leveraging node-local NAND Flash, our approach can process thirty-two times larger datasets with only a 39% performance degradation in Traversed Edges Per Second (TEPS). © 2013 IEEE.
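The following is a minimal, illustrative sketch of the general idea behind edge-list partitioning with ghost copies of high-degree hubs, not the paper's implementation. The modulo-based owner function, the degree threshold, and the round-robin spreading of hub edges are all assumptions made for the example; the paper's actual partitioning and ghosting scheme is described in the full text.

```cpp
// Illustrative sketch (assumed, not the paper's code): partition an edge list
// across simulated distributed partitions, spreading the edges of high-degree
// "hub" vertices across all partitions and recording a ghost copy of each hub
// wherever its edges land, so per-partition updates can be aggregated locally.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Edge { uint64_t src, dst; };

// Owner of a low-degree vertex under simple modulo (hash) partitioning.
static int owner(uint64_t v, int nparts) { return static_cast<int>(v % nparts); }

int main() {
  const int nparts = 4;              // number of simulated distributed partitions
  const uint64_t hub_threshold = 3;  // assumed out-degree above which a vertex is a hub

  // Toy edge list; vertex 0 plays the role of a high-degree hub.
  std::vector<Edge> edges = {{0,1},{0,2},{0,3},{0,4},{0,5},{1,2},{2,3},{4,5}};

  // Count out-degrees to identify hubs.
  std::unordered_map<uint64_t, uint64_t> degree;
  for (const Edge& e : edges) ++degree[e.src];

  // Per-partition edge lists plus the ghost (replicated) hubs each partition holds.
  std::vector<std::vector<Edge>> part_edges(nparts);
  std::vector<std::unordered_set<uint64_t>> ghosts(nparts);

  std::size_t rr = 0;  // round-robin counter used to spread hub edges
  for (const Edge& e : edges) {
    if (degree[e.src] > hub_threshold) {
      // Hub edges are distributed across all partitions; each partition that
      // holds hub edges keeps a ghost copy of the hub so local contributions
      // can be combined before a single message goes to the hub's owner.
      int p = static_cast<int>(rr++ % nparts);
      part_edges[p].push_back(e);
      ghosts[p].insert(e.src);
    } else {
      // Low-degree edges go to the partition that owns their source vertex.
      part_edges[owner(e.src, nparts)].push_back(e);
    }
  }

  for (int p = 0; p < nparts; ++p) {
    std::cout << "partition " << p << ": " << part_edges[p].size()
              << " edges, " << ghosts[p].size() << " ghost hub(s)\n";
  }
  return 0;
}
```

In a real distributed setting the partitions would be MPI ranks and the ghost copies would buffer updates destined for the hub, trading a small amount of replicated state for fewer point-to-point messages aimed at a single owner.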
Original language | English (US) |
---|---|
Title of host publication | 2013 IEEE 27th International Symposium on Parallel and Distributed Processing |
Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
Pages | 825-836 |
Number of pages | 12 |
ISBN (Print) | 9781467360661 |
DOIs | |
State | Published - May 2013 |
Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): KUS-C1-016-04
Acknowledgements: This work was partially performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-CONF-588232). Funding was partially provided by LDRD 11-ERD-008. Portions of the experiments were performed at the Livermore Computing facility resources. This research used resources of the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357. ALCF resources were provided through an INCITE 2012 award for the Fault-Oblivious Exascale Computing Environment project. This research was supported in part by NSF awards CNS-0615267, CCF-0833199, CCF-0830753, IIS-0917266, IIS-0916053, NSF/DNDO award 2008-DN-077-ARI018-02, by DOE awards DE-FC52-08NA28616, DE-AC02-06CH11357, B575363, B575366, by THECB NHARP award 000512-0097-2009, by Samsung, Chevron, IBM, Intel, Oracle/Sun, and by Award KUS-C1-016-04, made by King Abdullah University of Science and Technology (KAUST). Pearce is supported in part by a Lawrence Scholar fellowship at LLNL.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.