Surrogate losses for online learning of stepsizes in stochastic non-convex optimization

Zhenxun Zhuang, Ashok Cutkosky, Francesco Orabona

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses that cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function as an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates that are automatically adaptive to the level of noise.
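The abstract describes SGD whose stepsize is tuned by a no-regret online learner run on convex surrogate losses. The sketch below illustrates that general idea under several assumptions: the surrogate form ℓ_t(η) = −η⟨g₁, g₂⟩ + (L/2)η²‖g₁‖² (motivated by the standard smoothness upper bound), the use of two independent stochastic gradients per step, the online-gradient-descent update on η, and all constants are illustrative choices, not the paper's exact construction.

```python
# Illustrative sketch only: SGD whose stepsize is tuned online by running
# online gradient descent on a convex surrogate loss derived from the
# smoothness bound  f(x - eta*g) <= f(x) - eta*<grad f(x), g> + (L/2)*eta^2*||g||^2.
# The surrogate form, the two independent gradient estimates per step, and all
# constants below are assumptions for illustration, not the paper's recipe.
import numpy as np

rng = np.random.default_rng(0)
d, L, T = 10, 2.0, 500                 # dimension, assumed smoothness constant, iterations
A = np.diag(np.linspace(0.1, L, d))    # toy quadratic objective f(x) = 0.5 * x^T A x

def stoch_grad(x):
    """Unbiased stochastic gradient of the toy objective (additive noise)."""
    return A @ x + 0.1 * rng.standard_normal(d)

x = rng.standard_normal(d)
eta, eta_max = 0.1, 1.0 / L            # current stepsize and its feasible range (assumed)
beta = 0.01                            # learning rate of the online learner (assumed)

for t in range(T):
    g1 = stoch_grad(x)                 # gradient used for the SGD update
    g2 = stoch_grad(x)                 # independent copy to keep the surrogate unbiased
    # Convex surrogate in eta:  ell_t(eta) = -eta*<g1, g2> + (L/2)*eta^2*||g1||^2.
    # Its derivative at the current eta drives an online-gradient-descent step.
    surrogate_grad = -np.dot(g1, g2) + L * eta * np.dot(g1, g1)
    eta = float(np.clip(eta - beta * surrogate_grad, 0.0, eta_max))
    x = x - eta * g1                   # plain SGD step with the self-tuned stepsize

print("final ||x|| =", np.linalg.norm(x), " final stepsize =", eta)
```

On a smooth problem, the surrogate upper-bounds the per-step increase of the objective, so low regret on the surrogates translates into a descent guarantee; that is the mechanism the abstract refers to, though the exact losses and analysis are in the paper.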
Original language: English (US)
Title of host publication: 36th International Conference on Machine Learning, ICML 2019
Publisher: International Machine Learning Society (IMLS)
Pages: 13215-13226
Number of pages: 12
ISBN (Print): 9781510886988
State: Published - Jan 1 2019
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-25
