Abstract
Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses to cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function as an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes whose convergence rates automatically adapt to the level of noise.
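The abstract does not spell out the form of the surrogate losses or the online learner. The following is a minimal sketch of the general idea only, assuming a convex surrogate loss derived from the standard L-smoothness upper bound and projected online gradient descent as the no-regret learner; the paper's exact surrogate losses and algorithm may differ. All function names, constants, and the toy objective below are hypothetical.

```python
# Sketch: SGD whose stepsize is tuned on the fly by a no-regret online learner.
# Assumed per-round surrogate loss (convex quadratic in eta, from L-smoothness):
#   ell_t(eta) = -eta * <g_t, g_t'> + (L / 2) * eta**2 * ||g_t||**2
# The stepsize eta is updated by projected online gradient descent on ell_t.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x, noise_std=0.1):
    """Noisy gradient of a toy smooth non-convex objective f(x) = sum(x^2 / (1 + x^2))."""
    true_grad = 2 * x / (1 + x**2) ** 2
    return true_grad + noise_std * rng.standard_normal(x.shape)

def sgd_with_online_stepsizes(x0, L=2.0, eta_max=1.0, ol_lr=0.05, T=500):
    x = x0.astype(float)
    eta = eta_max / 2.0  # initial stepsize guess
    for _ in range(T):
        g = stochastic_grad(x)        # gradient used for the SGD step
        g_prime = stochastic_grad(x)  # independent gradient used in the surrogate loss
        x = x - eta * g               # SGD step with the current self-tuned stepsize
        # Online (projected) gradient step on eta, using the derivative of ell_t(eta).
        d_ell = -np.dot(g, g_prime) + L * eta * np.dot(g, g)
        eta = float(np.clip(eta - ol_lr * d_ell, 0.0, eta_max))
    return x, eta

if __name__ == "__main__":
    x_final, eta_final = sgd_with_online_stepsizes(x0=rng.standard_normal(10))
    print("final iterate norm:", np.linalg.norm(x_final), "final stepsize:", eta_final)
```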
Original language | English (US) |
---|---|
Title of host publication | 36th International Conference on Machine Learning, ICML 2019 |
Publisher | International Machine Learning Society (IMLS)
Pages | 13215-13226 |
Number of pages | 12 |
ISBN (Print) | 9781510886988 |
State | Published - Jan 1 2019 |
Externally published | Yes |