TY - GEN
T1 - A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
AU - Gorbunov, Eduard
AU - Hanzely, Filip
AU - Richtarik, Peter
N1 - KAUST Repository Item: Exported on 2021-09-02
PY - 2020
Y1 - 2020
AB - In this paper we introduce a unified analysis of a large family of variants of proximal stochastic gradient descent (SGD) which so far have required different intuitions, convergence analyses, have different applications, and which have been developed separately in various communities. We show that our framework includes methods with and without the following tricks, and their combinations: variance reduction, importance sampling, mini-batch sampling, quantization, and coordinate sub-sampling. As a by-product, we obtain the first unified theory of SGD and randomized coordinate descent (RCD) methods, the first unified theory of variance-reduced and non-variance-reduced SGD methods, and the first unified theory of quantized and non-quantized methods. A key to our approach is a parametric assumption on the iterates and stochastic gradients. In a single theorem we establish a linear convergence result under this assumption and strong quasi-convexity of the loss function. Whenever we recover an existing method as a special case, our theorem gives the best known complexity result. Our approach can be used to motivate the development of new useful methods, and offers pre-proved convergence guarantees. To illustrate the strength of our approach, we develop five new variants of SGD, and through numerical experiments demonstrate some of their properties.
UR - http://hdl.handle.net/10754/670894
UR - http://proceedings.mlr.press/v108/gorbunov20a/gorbunov20a.pdf
M3 - Conference contribution
SP - 680
EP - 689
BT - Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020, Palermo, Italy.
ER -