Abstract
Most algorithms for solving optimization problems or finding saddle points of convex-concave functions are fixed-point algorithms. In this work we consider the generic problem of finding a fixed point of an average of operators, or an approximation thereof, in a distributed setting. Our work is motivated by the needs of federated learning. In this context, each local operator models the computations done locally on a mobile device. We investigate two strategies to achieve such a consensus: one based on a fixed number of local steps, and the other based on randomized computations. In both cases, the goal is to limit communication of the locally computed variables, which is often the bottleneck in distributed frameworks. We perform a convergence analysis of both methods and conduct a number of experiments highlighting the benefits of our approach.
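The local-steps strategy described in the abstract can be sketched as follows: each device repeatedly applies its own operator a fixed number of times, and only the resulting iterates are communicated and averaged. This is a minimal illustration, not the paper's exact method; the operators here are assumed to be gradient steps on simple quadratic losses, and the names `a`, `b`, `eta`, and `local_steps_consensus` are illustrative choices, not from the source.

```python
import numpy as np

# Illustrative local operators (assumption, not from the paper):
# device i holds f_i(x) = 0.5 * a[i] * (x - b[i])**2, so a gradient
# step is T_i(x) = x - eta * a[i] * (x - b[i]).
a = np.array([1.0, 2.0])
b = np.array([0.0, 10.0])
eta = 0.01  # small step size keeps the local-step drift small

def T(i, x):
    """One application of device i's local operator."""
    return x - eta * a[i] * (x - b[i])

def local_steps_consensus(x0, local_steps, rounds):
    """Each round, every device applies its operator `local_steps`
    times starting from the shared point, then the results are
    averaged -- so communication happens once per round, not once
    per operator application."""
    x = x0
    for _ in range(rounds):
        local_iterates = []
        for i in range(len(a)):
            xi = x
            for _ in range(local_steps):
                xi = T(i, xi)
            local_iterates.append(xi)
        x = float(np.mean(local_iterates))
    return x

# Fixed point of the *average* operator: mean gradient vanishes at
# x* = sum(a*b) / sum(a).
x_star = np.sum(a * b) / np.sum(a)
x_hat = local_steps_consensus(x0=0.0, local_steps=5, rounds=300)
```

With more than one local step the method converges to a neighborhood of the true fixed point rather than to it exactly (the local iterates drift toward each device's own fixed point), and the bias shrinks with the step size; this is the kind of trade-off the convergence analysis in the paper quantifies.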
Original language | English (US)
---|---
Title of host publication | 37th International Conference on Machine Learning, ICML 2020
Editors | Hal Daumé III, Aarti Singh
Publisher | International Machine Learning Society (IMLS)
Pages | 6648-6657
Number of pages | 10
ISBN (Electronic) | 9781713821120
State | Published - 2020
Event | 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration | Jul 13 2020 → Jul 18 2020
Publication series
Name | 37th International Conference on Machine Learning, ICML 2020
---|---
Volume | PartF168147-9
Conference
Conference | 37th International Conference on Machine Learning, ICML 2020
---|---
City | Virtual, Online
Period | 07/13/20 → 07/18/20
Bibliographical note
Publisher Copyright: © 2020 37th International Conference on Machine Learning, ICML 2020. All rights reserved.
ASJC Scopus subject areas
- Computational Theory and Mathematics
- Human-Computer Interaction
- Software