Abstract
We provide the first importance sampling variants of variance-reduced algorithms for empirical risk minimization with non-convex loss functions. In particular, we analyze non-convex versions of SVRG, SAGA and SARAH. Our methods have the capacity to speed up the training process by an order of magnitude compared to the state of the art on real datasets. Moreover, we also improve upon the current mini-batch analysis of these methods by proposing importance sampling for mini-batches in this setting. Ours are the first optimal samplings for mini-batches in the stochastic optimization literature. Surprisingly, in some regimes our approach can lead to a superlinear speedup with respect to the mini-batch size, which is not usually present in stochastic optimization. All of the above results follow from a general analysis of the methods that works with arbitrary sampling, i.e., a fully general randomized strategy for selecting the subset of examples to be sampled in each iteration. Finally, we also perform a novel importance sampling analysis of SARAH in the convex setting.
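The abstract refers to importance sampling for variance-reduced methods such as SVRG. As a rough illustration of the general idea only (not the paper's exact algorithm or analysis), the sketch below runs SVRG on a simple least-squares problem, sampling each example with probability proportional to its per-example smoothness constant and rescaling the gradient to keep the estimator unbiased; the function name `svrg_importance_sampling` and all parameter choices are hypothetical.

```python
import numpy as np

def svrg_importance_sampling(A, b, step_size=5e-3, n_outer=30, m=None, seed=0):
    """Illustrative sketch of SVRG with importance sampling on
    f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.

    Index i is drawn with probability p_i proportional to the per-example
    smoothness constant L_i = ||a_i||^2, and the stochastic gradient is
    rescaled by 1 / (n * p_i) so the estimator stays unbiased.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = m or n                       # inner-loop (epoch) length

    L = np.sum(A ** 2, axis=1)       # per-example smoothness constants
    p = L / L.sum()                  # importance-sampling probabilities

    x = np.zeros(d)
    for _ in range(n_outer):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n     # full gradient at the snapshot
        for _ in range(m):
            i = rng.choice(n, p=p)
            g_i = A[i] * (A[i] @ x - b[i])            # grad of f_i at x
            g_i_ref = A[i] * (A[i] @ x_ref - b[i])    # grad of f_i at snapshot
            # Unbiased variance-reduced gradient estimate.
            v = (g_i - g_i_ref) / (n * p[i]) + full_grad
            x -= step_size * v
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true
    x_hat = svrg_importance_sampling(A, b)
    print("distance to solution:", np.linalg.norm(x_hat - x_true))
```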
Original language | English (US) |
---|---|
Title of host publication | 36th International Conference on Machine Learning, ICML 2019 |
Publisher | International Machine Learning Society (IMLS) |
Pages | 4913-4921 |
Number of pages | 9 |
ISBN (Electronic) | 9781510886988 |
State | Published - 2019 |
Event | 36th International Conference on Machine Learning, ICML 2019 - Long Beach, United States |
Duration | Jun 9, 2019 → Jun 15, 2019 |
Publication series
Name | 36th International Conference on Machine Learning, ICML 2019 |
---|---|
Volume | 2019-June |
Conference
Conference | 36th International Conference on Machine Learning, ICML 2019 |
---|---|
Country/Territory | United States |
City | Long Beach |
Period | 06/9/19 → 06/15/19 |
Bibliographical note
Publisher Copyright: Copyright 2019 by the author(s).
ASJC Scopus subject areas
- Education
- Computer Science Applications
- Human-Computer Interaction