Abstract
Sparse tensors appear frequently in federated deep learning, either as a direct artifact of the deep neural network's gradients or as the result of an explicit sparsification process. Existing communication primitives are agnostic to the challenges of deep learning; consequently, they impose unnecessary communication overhead. This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors, tailored to federated deep learning. DeepReduce decomposes sparse tensors into two sets, values and indices, and allows both independent and combined compression of these sets. We support a variety of standard compressors, such as Deflate for values and Run-Length Encoding for indices. We also propose two novel compression schemes that achieve superior results: a curve-fitting-based scheme for values and a Bloom-filter-based scheme for indices. DeepReduce is orthogonal to existing gradient sparsifiers and can be applied in conjunction with them, transparently to the end user, to significantly lower the communication overhead. As a proof of concept, we implement our approach on TensorFlow and PyTorch. Our experiments with real models demonstrate that DeepReduce transmits 3.2x less data than existing sparsifiers, without affecting accuracy.
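The core idea described above, splitting a sparse tensor into separate index and value sets and compressing each independently, can be illustrated with a minimal sketch. The snippet below is not the DeepReduce implementation; it assumes zlib as a stand-in for Deflate on the values and a simple delta encoding of the sorted indices in place of the paper's Bloom-filter and curve-fitting schemes. All function names are hypothetical.

```python
# Minimal sketch of value/index decomposition with independent compression.
# Assumptions: zlib stands in for Deflate; a delta (gap) encoding stands in
# for the index compressor. This is illustrative, not the DeepReduce code.
import zlib
import numpy as np
import torch

def decompose(sparse_grad: torch.Tensor):
    """Split a mostly-zero tensor into (indices, values) of its non-zeros."""
    flat = sparse_grad.flatten()
    indices = torch.nonzero(flat, as_tuple=False).flatten()
    values = flat[indices]
    return indices, values

def compress_values(values: torch.Tensor) -> bytes:
    """Compress the value set with Deflate (via zlib)."""
    return zlib.compress(values.numpy().astype(np.float32).tobytes())

def compress_indices(indices: torch.Tensor) -> bytes:
    """Delta-encode the sorted indices (small gaps), then Deflate them."""
    gaps = np.diff(indices.numpy(), prepend=0).astype(np.uint32)
    return zlib.compress(gaps.tobytes())

# Example: a gradient with ~1% non-zero entries after sparsification.
g = torch.zeros(100_000)
nz = torch.randperm(100_000)[:1_000]
g[nz] = torch.randn(1_000)

idx, val = decompose(g)
payload = compress_indices(idx) + compress_values(val)
print(len(payload), "bytes compressed vs", g.numel() * 4, "bytes dense")
```

Because the two sets are compressed independently, either compressor can be swapped out (e.g., for a Bloom-filter index encoder or a curve-fitting value encoder) without touching the other path.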
| Original language | English (US) |
|---|---|
| Title of host publication | 35th Conference on Neural Information Processing Systems, NeurIPS 2021 |
| Publisher | Neural Information Processing Systems Foundation |
| Pages | 21150-21163 |
| Number of pages | 14 |
| ISBN (Print) | 9781713845393 |
| State | Published - Jan 1 2021 |
Bibliographical note
KAUST Repository Item: Exported on 2022-07-01. Acknowledgements: Kelly Kostopoulou was supported by the KAUST Visiting Student Research Program. The computing infrastructure was provided by the KAUST Supercomputing Lab (KSL).