Fast distributed coordinate descent for non-strongly convex losses

Olivier Fercoq, Zheng Qu, Peter Richtárik, Martin Takáč

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

31 Scopus citations

Abstract

We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal O(1/k²) convergence rate, where k is the iteration counter. The core of the work is the theoretical study of stepsize parameters. We have implemented the method on ARCHER, the largest supercomputer in the UK, and show that the method is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.
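To make the setting concrete, the following is a minimal sketch of the basic building block the paper accelerates and distributes: serial randomized proximal coordinate descent on the LASSO objective 0.5·||Ax − b||² + λ·||x||₁. It is an illustrative assumption, not the authors' implementation; the function names (rcd_lasso, soft_threshold) are hypothetical, and the paper's method additionally applies Nesterov-style acceleration (yielding the O(1/k²) rate) and partitions coordinates across compute nodes.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t*|.|: shrink v toward zero by t.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def rcd_lasso(A, b, lam, iters=100_000, seed=0):
        # Hypothetical serial sketch of randomized proximal coordinate
        # descent for min_x 0.5*||A x - b||^2 + lam*||x||_1.
        # The paper's distributed, accelerated method is NOT reproduced here.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        L = (A * A).sum(axis=0)        # per-coordinate Lipschitz constants ||A_i||^2
        x = np.zeros(n)
        r = A @ x - b                  # residual r = A x - b, maintained incrementally
        for _ in range(iters):
            i = rng.integers(n)
            if L[i] == 0.0:            # skip all-zero columns
                continue
            g = A[:, i] @ r            # i-th partial derivative of the smooth part
            x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
            delta = x_new - x[i]
            if delta != 0.0:
                r += delta * A[:, i]   # keep the residual in sync with x
                x[i] = x_new
        return x

Maintaining the residual incrementally keeps each iteration at O(m) cost independent of the number of variables touched so far, which is what makes coordinate-wise methods attractive at the billion-variable scale the abstract describes.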

Original language: English (US)
Title of host publication: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
Editors: Mamadou Mboup, Tulay Adali, Eric Moreau, Jan Larsen
Publisher: IEEE Computer Society
ISBN (Electronic): 9781479936946
State: Published - Nov 14 2014
Externally published: Yes
Event: 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014 - Reims, France
Duration: Sep 21 2014 - Sep 24 2014

Publication series

Name: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
ISSN (Print): 2161-0363
ISSN (Electronic): 2161-0371

Other

Other: 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014
Country/Territory: France
City: Reims
Period: 09/21/14 - 09/24/14

Bibliographical note

Publisher Copyright:
© 2014 IEEE.

Keywords

  • Coordinate descent
  • acceleration
  • distributed algorithms

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Signal Processing
