Abstract
In distributed training, communication often emerges as a bottleneck. In response, we introduce Kimad, a solution that offers adaptive gradient compression: by continuously monitoring bandwidth, Kimad adjusts compression ratios to match the requirements of individual neural network layers. Our extensive experiments and theoretical analysis confirm Kimad's strong performance, establishing it as a benchmark for adaptive compression in distributed deep learning.
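The core idea in the abstract, choosing per-layer compression ratios according to the currently observed bandwidth, can be illustrated with a minimal sketch. The sketch below is not the paper's algorithm: it assumes Top-K sparsification, a bandwidth estimate supplied by the caller, and a fixed per-step communication budget, and uses a simple uniform scaling rule purely for illustration.

```python
import numpy as np

def topk_compress(grad: np.ndarray, ratio: float):
    """Keep only the largest-magnitude `ratio` fraction of entries (Top-K sparsification)."""
    k = max(1, int(ratio * grad.size))
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest |values|
    return idx, flat[idx]

def adaptive_ratios(layer_sizes, bandwidth_bps, time_budget_s,
                    bytes_per_value=4, min_ratio=0.01, max_ratio=1.0):
    """Pick per-layer compression ratios so that the compressed gradients fit the
    communication volume allowed by the measured bandwidth and time budget.
    Illustrative heuristic only -- not Kimad's actual assignment rule."""
    budget_values = (bandwidth_bps / 8) * time_budget_s / bytes_per_value
    total = sum(layer_sizes)
    # Scale all layers by the same factor so the total volume meets the budget;
    # a real scheme could instead weight layers by their sensitivity to compression.
    scale = min(max_ratio, budget_values / total)
    return [float(np.clip(scale, min_ratio, max_ratio)) for _ in layer_sizes]

# Example: a slow link forces more aggressive compression.
layer_sizes = [1_000_000, 250_000, 10_000]           # parameters per layer
ratios = adaptive_ratios(layer_sizes, bandwidth_bps=100e6, time_budget_s=0.05)
grads = [np.random.randn(n) for n in layer_sizes]
compressed = [topk_compress(g, r) for g, r in zip(grads, ratios)]
print(ratios)
```

With the assumed 100 Mbit/s link and a 50 ms budget, the sketch settles on roughly 12% of gradient entries per layer; a faster link would raise the ratio toward 1.0 (no compression), which is the bandwidth-aware behavior the abstract describes.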
Original language | English (US) |
---|---|
Title of host publication | DistributedML 2023 - Proceedings of the 4th International Workshop on Distributed Machine Learning |
Publisher | Association for Computing Machinery, Inc |
Pages | 35-48 |
Number of pages | 14 |
ISBN (Electronic) | 9798400704475 |
DOIs | |
State | Published - Dec 8 2023 |
Event | 4th International Workshop on Distributed Machine Learning, DistributedML 2023 - Paris, France; Duration: Dec 8 2023 → … |
Publication series
Name | DistributedML 2023 - Proceedings of the 4th International Workshop on Distributed Machine Learning |
---|---|
Conference
Conference | 4th International Workshop on Distributed Machine Learning, DistributedML 2023 |
---|---|
Country/Territory | France |
City | Paris |
Period | 12/8/23 → … |
Bibliographical note
Publisher Copyright: © 2023 Owner/Author.
Keywords
- distributed training
- gradient compression
ASJC Scopus subject areas
- Computer Networks and Communications
- Computer Science Applications
- Hardware and Architecture