Abstract
We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal O(1/k²) convergence rate, where k is the iteration counter. At the core of the work is a theoretical study of the stepsize parameters. We have implemented the method on ARCHER (the largest supercomputer in the UK) and show that it is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.
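For readers unfamiliar with coordinate descent applied to LASSO, the sketch below shows a plain, single-node randomized coordinate descent loop with soft-thresholding updates. It is only an illustration of the basic coordinate update, not the accelerated distributed method described in the paper; all names (A, b, lam, n_iters) and parameter choices are illustrative assumptions.

```python
# Minimal single-node sketch of randomized coordinate descent for the LASSO
# problem  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# NOT the paper's accelerated distributed algorithm; illustrative only.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def randomized_cd_lasso(A, b, lam, n_iters=10000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    residual = A @ x - b                 # keep A x - b up to date
    col_norms = (A ** 2).sum(axis=0)     # coordinate-wise Lipschitz constants
    for _ in range(n_iters):
        i = rng.integers(n)              # pick a coordinate uniformly at random
        if col_norms[i] == 0.0:
            continue
        grad_i = A[:, i] @ residual      # partial derivative of the smooth part
        # prox (soft-thresholding) step on coordinate i
        x_new_i = soft_threshold(x[i] - grad_i / col_norms[i],
                                 lam / col_norms[i])
        residual += A[:, i] * (x_new_i - x[i])
        x[i] = x_new_i
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 500))
    x_true = np.zeros(500)
    x_true[:10] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = randomized_cd_lasso(A, b, lam=0.1)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

The distributed variant studied in the paper partitions coordinates across compute nodes and updates many of them in parallel per iteration, with the stepsizes chosen so that the accelerated O(1/k²) rate is retained; the serial sketch above omits both of these ingredients.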
Original language | English (US) |
---|---|
Title of host publication | IEEE International Workshop on Machine Learning for Signal Processing, MLSP |
Editors | Mamadou Mboup, Tulay Adali, Eric Moreau, Jan Larsen |
Publisher | IEEE Computer Society |
ISBN (Electronic) | 9781479936946 |
DOIs | |
State | Published - Nov 14 2014 |
Externally published | Yes |
Event | 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014, Reims, France |
Duration | Sep 21 2014 → Sep 24 2014 |
Publication series
Name | IEEE International Workshop on Machine Learning for Signal Processing, MLSP |
---|---|
ISSN (Print) | 2161-0363 |
ISSN (Electronic) | 2161-0371 |
Other
Other | 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014 |
---|---|
Country/Territory | France |
City | Reims |
Period | 09/21/14 → 09/24/14 |
Bibliographical note
Publisher Copyright: © 2014 IEEE.
Keywords
- Coordinate descent
- Acceleration
- Distributed algorithms
ASJC Scopus subject areas
- Human-Computer Interaction
- Signal Processing