Abstract
Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck. It is difficult to reduce this bottleneck while still efficiently and accurately aggregating partial work from different machines. In this paper, we present a novel generalization of the recent communication-efficient primal-dual framework (CoCoA) for distributed optimization. Our framework, CoCoA+, allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes with convergence guarantees only allow conservative averaging. We give stronger (primal-dual) convergence rate guarantees for both CoCoA and our new variants, and generalize the theory for both methods to cover non-smooth convex loss functions. We provide an extensive experimental comparison that shows the markedly improved performance of CoCoA+ on several real-world distributed datasets, especially when scaling up the number of machines.
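To make the adding-vs-averaging distinction concrete, the sketch below contrasts the two aggregation rules on the partial updates returned by K machines. It is a minimal illustration only, not the paper's algorithm or notation; the function names and the simple dense-vector setup are assumptions made for this example.

```python
import numpy as np

def aggregate_averaging(alpha, local_deltas):
    """Conservative averaging: scale the combined local updates by 1/K."""
    K = len(local_deltas)
    return alpha + sum(local_deltas) / K

def aggregate_adding(alpha, local_deltas):
    """Additive combination: apply the full sum of local updates."""
    return alpha + sum(local_deltas)

# Hypothetical usage: alpha is the shared parameter vector, and
# local_deltas holds the partial updates computed by K = 4 machines.
alpha = np.zeros(5)
local_deltas = [np.random.randn(5) for _ in range(4)]
alpha_avg = aggregate_averaging(alpha, local_deltas)
alpha_add = aggregate_adding(alpha, local_deltas)
```

Averaging damps each machine's contribution by 1/K, whereas the additive rule applies the full combined update at every round, which is the behavior CoCoA+ is designed to support with convergence guarantees.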
Original language | English (US) |
---|---|
Title of host publication | 32nd International Conference on Machine Learning, ICML 2015 |
Editors | Francis Bach, David Blei |
Publisher | International Machine Learning Society (IMLS) |
Pages | 1973-1982 |
Number of pages | 10 |
ISBN (Electronic) | 9781510810587 |
State | Published - 2015 |
Externally published | Yes |
Event | 32nd International Conference on Machine Learning, ICML 2015 - Lille, France. Duration: Jul 6 2015 → Jul 11 2015 |
Publication series
Name | 32nd International Conference on Machine Learning, ICML 2015 |
---|---|
Volume | 3 |
Other
Other | 32nd International Conference on Machine Learning, ICML 2015 |
---|---|
Country/Territory | France |
City | Lille |
Period | 07/6/15 → 07/11/15 |
Bibliographical note
Publisher Copyright: © 2015 International Machine Learning Society (IMLS). All rights reserved.
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Science Applications