SSRGD: Simple stochastic recursive gradient descent for escaping saddle points

Zhize Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

22 Scopus citations

Abstract

We analyze stochastic gradient algorithms for optimizing nonconvex problems. In particular, our goal is to find local minima (second-order stationary points) instead of just finding first-order stationary points, which may be bad unstable saddle points. We show that a simple perturbed version of the stochastic recursive gradient descent algorithm (called SSRGD) can find an (ε, δ)-second-order stationary point with Õ(√n/ε² + √n/δ⁴ + n/δ³) stochastic gradient complexity for nonconvex finite-sum problems. As a by-product, SSRGD finds an ε-first-order stationary point with O(n + √n/ε²) stochastic gradients. These results are almost optimal, since Fang et al. [11] provided a lower bound of Ω(√n/ε²) for finding even just an ε-first-order stationary point. We emphasize that the SSRGD algorithm for finding second-order stationary points is as simple as the one for finding first-order stationary points: it merely adds a uniform perturbation from time to time, whereas all other algorithms that find second-order stationary points with similar gradient complexity must be combined with a negative-curvature search subroutine (e.g., Neon2 [4]). As a result, the simple SSRGD algorithm also admits a simpler analysis. Finally, we extend our results from nonconvex finite-sum problems to nonconvex online (expectation) problems and prove the corresponding convergence results.
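
To make the "just add a uniform perturbation sometimes" idea concrete, the following is a minimal NumPy sketch of a perturbed SARAH/SPIDER-style recursive gradient loop in the spirit of SSRGD. The function name ssrgd_sketch, the grad_i(x, idx) interface, and all step sizes, batch sizes, thresholds, and the stopping rule are illustrative assumptions, not the paper's exact specification.

import numpy as np

def ssrgd_sketch(grad_i, n, x0, eta=0.05, batch=32, epoch_len=100,
                 grad_thresh=1e-3, perturb_radius=1e-2, n_epochs=50, seed=0):
    """grad_i(x, idx) is assumed to return the average gradient of f_i over indices idx."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_epochs):
        # Anchor the recursive estimator at a full gradient once per epoch.
        v = grad_i(x, np.arange(n))
        for _ in range(epoch_len):
            if np.linalg.norm(v) <= grad_thresh:
                # Small estimated gradient: add a perturbation drawn uniformly
                # from a ball of radius perturb_radius to help escape a saddle point.
                direction = rng.normal(size=x.shape)
                direction /= np.linalg.norm(direction)
                radius = perturb_radius * rng.uniform() ** (1.0 / x.size)
                x = x + radius * direction
            x_prev = x
            x = x - eta * v
            # SARAH/SPIDER-style recursive update of the gradient estimator
            # using a small mini-batch of gradient differences.
            idx = rng.choice(n, size=batch, replace=False)
            v = v + grad_i(x, idx) - grad_i(x_prev, idx)
    return x

Roughly, the full gradient computed once per epoch anchors the cheap recursive mini-batch corrections, which is where the √n factors in the bounds above come from, while the occasional uniform perturbation is the only extra ingredient used to escape saddle points.
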
Original language: English (US)
Title of host publication: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019
Publisher: Neural Information Processing Systems Foundation
State: Published - Jan 1 2019
