Better Theory for SGD in the Nonconvex World

Ahmed Khaled, Peter Richtárik

Research output: Contribution to journal › Article › peer-review


Abstract

Large-scale nonconvex optimization problems are ubiquitous in modern machine learning, and among practitioners interested in solving them, Stochastic Gradient Descent (SGD) reigns supreme. We revisit the analysis of SGD in the nonconvex setting and propose a new variant of the recently introduced expected smoothness assumption which governs the behavior of the second moment of the stochastic gradient. We show that our assumption is both more general and more reasonable than assumptions made in all prior work. Moreover, our results yield the optimal O(ε⁻⁴) rate for finding a stationary point of nonconvex smooth functions, and recover the optimal O(ε⁻¹) rate for finding a global solution if the Polyak-Łojasiewicz condition is satisfied. We compare against convergence rates under convexity and prove a theorem on the convergence of SGD under Quadratic Functional Growth and convexity, which might be of independent interest. Furthermore, we perform our analysis in a framework which allows for a detailed study of the effects of a wide array of sampling strategies and minibatch sizes for finite-sum optimization problems. We corroborate our theoretical results with experiments on real and synthetic data.
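
To make the setting concrete, second-moment conditions of the kind alluded to above are typically bounds of the form E‖∇f_ξ(x)‖² ≤ 2A(f(x) − f_inf) + B‖∇f(x)‖² + C for nonnegative constants A, B, C; see the paper for the precise statement and for how standard assumptions arise as special cases. The sketch below is a minimal, illustrative minibatch SGD loop for a finite-sum problem, assuming uniform sampling with replacement and a fixed step size; the function names and parameters are hypothetical and are not taken from the paper's experiments.

    import numpy as np

    def minibatch_sgd(grad_i, x0, n, step_size=0.01, batch_size=8, iters=1000, seed=0):
        """Minibatch SGD for a finite-sum objective f(x) = (1/n) * sum_i f_i(x).

        grad_i(i, x) must return the gradient of the i-th component f_i at x.
        Uniform sampling with replacement is only one of the sampling
        strategies covered by the paper's framework.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iters):
            batch = rng.integers(0, n, size=batch_size)         # sample a minibatch of indices
            g = np.mean([grad_i(i, x) for i in batch], axis=0)  # minibatch stochastic gradient
            x -= step_size * g                                  # SGD update
        return x

In this regime, finding an ε-stationary point (a point x with E‖∇f(x)‖ ≤ ε) of a smooth nonconvex f requires on the order of ε⁻⁴ stochastic gradient evaluations, which is the optimal O(ε⁻⁴) rate quoted in the abstract.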

Original language: English (US)
Journal: Transactions on Machine Learning Research
Volume: 2023-February
State: Published - Mar 1 2023

Bibliographical note

Publisher Copyright:
© 2023, Transactions on Machine Learning Research. All rights reserved.

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
