Random reshuffling: Simple analysis with vast improvements

Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik

Research output: Contribution to conference › Paper › peer-review

51 Scopus citations

Abstract

Random Reshuffling (RR) is an algorithm for minimizing finite-sum functions that utilizes iterative gradient descent steps in conjunction with data reshuffling. Often contrasted with its sibling Stochastic Gradient Descent (SGD), RR is usually faster in practice and enjoys significant popularity in convex and non-convex optimization. The convergence rate of RR has attracted substantial attention recently and, for strongly convex and smooth functions, it was shown to converge faster than SGD if 1) the stepsize is small, 2) the gradients are bounded, and 3) the number of epochs is large. We remove these 3 assumptions, improve the dependence on the condition number from κ² to κ (resp. from κ to √κ) and, in addition, show that RR has a different type of variance. We argue through theory and experiments that the new variance type gives an additional justification of the superior performance of RR. To go beyond strong convexity, we present several results for non-strongly convex and non-convex objectives. We show that in all cases, our theory improves upon existing literature. Finally, we prove fast convergence of the Shuffle-Once (SO) algorithm, which shuffles the data only once, at the beginning of the optimization process. Our theory for strongly convex objectives tightly matches the known lower bounds for both RR and SO and substantiates the common practical heuristic of shuffling once or only a few times. As a byproduct of our analysis, we also get new results for the Incremental Gradient algorithm (IG), which does not shuffle the data at all.
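
The three methods analyzed in the paper differ only in how they order the component gradients within each pass over the data. The following minimal sketch (not the authors' code) illustrates that difference on a least-squares finite-sum objective; the objective, stepsize, and epoch count are illustrative assumptions, chosen only to make the orderings concrete.

    # Sketch contrasting the data orderings from the abstract:
    #   RR - re-permutes the data at the start of every epoch,
    #   SO - permutes once before training and reuses that order,
    #   IG - keeps the natural (unshuffled) order.
    # Objective: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2  (an assumed example).
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100, 10
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    def grad_i(x, i):
        # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2.
        return A[i] * (A[i] @ x - b[i])

    def finite_sum_descent(scheme, stepsize=0.01, epochs=50):
        # One epoch = n component-gradient steps; `scheme` picks the ordering.
        x = np.zeros(d)
        perm = rng.permutation(n)  # fixed for SO, refreshed for RR
        for _ in range(epochs):
            if scheme == "RR":      # reshuffle at the start of every epoch
                perm = rng.permutation(n)
            elif scheme == "IG":    # deterministic pass in natural order
                perm = np.arange(n)
            for i in perm:          # SO keeps the single initial permutation
                x -= stepsize * grad_i(x, i)
        return x

    x_star = np.linalg.lstsq(A, b, rcond=None)[0]
    for scheme in ("RR", "SO", "IG"):
        err = np.linalg.norm(finite_sum_descent(scheme) - x_star)
        print(f"{scheme}: distance to minimizer = {err:.3e}")

With a constant stepsize, all three variants converge only to a neighborhood of the minimizer; the paper's point is that the size of that neighborhood (the variance term) and the resulting rates differ across the schemes.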

Original language: English (US)
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6, 2020 – Dec 12, 2020

Conference

Conference: 34th Conference on Neural Information Processing Systems, NeurIPS 2020
City: Virtual, Online
Period: 12/6/20 – 12/12/20

Bibliographical note

Funding Information:
Ahmed Khaled acknowledges internship support from the Optimization and Machine Learning Lab led by Peter Richtárik at KAUST.

Publisher Copyright:
© 2020 Neural information processing systems foundation. All rights reserved.

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
