Accelerated variance-reduced methods for saddle-point problems

Ekaterina Borodich, Vladislav Tominin, Yaroslav Tominin, Dmitry Kovalev, Alexander Gasnikov, Pavel Dvurechensky

Research output: Contribution to journal › Article › peer-review


We consider composite minimax optimization problems in which the goal is to find a saddle point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly optimal algorithms for this setting.
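To illustrate the finite-sum saddle-point setting described above, the sketch below implements a plain (non-accelerated) SVRG-style variance-reduced gradient method on a hypothetical toy instance: each component f_i(x, y) = (a_i/2)x² + b_i·x·y − (c_i/2)y² is strongly convex in x and strongly concave in y, so the average has its saddle point at the origin. This is only a minimal sketch of the variance-reduction idea (snapshot full gradient plus a per-sample correction), not the accelerated algorithms proposed in the paper; the composite proximal steps for the regularizers are also omitted for brevity.

```python
import random

# Hypothetical toy instance (not from the paper):
# f_i(x, y) = (a_i/2) x^2 + b_i x y - (c_i/2) y^2, with a_i, c_i >= 1,
# so the averaged objective is strongly-convex-strongly-concave with saddle at (0, 0).
random.seed(0)
n = 50
a = [1.0 + random.random() for _ in range(n)]
b = [random.uniform(-1.0, 1.0) for _ in range(n)]
c = [1.0 + random.random() for _ in range(n)]

def grad_i(i, x, y):
    """Monotone operator of f_i: (df/dx, -df/dy), i.e. descent in x, ascent in y."""
    return a[i] * x + b[i] * y, -(b[i] * x - c[i] * y)

def full_grad(x, y):
    """Full (averaged) operator over all n components."""
    gx = sum(a[i] * x + b[i] * y for i in range(n)) / n
    gy = sum(-(b[i] * x - c[i] * y) for i in range(n)) / n
    return gx, gy

def svrg_saddle(x0, y0, step=0.1, epochs=30, inner=2 * n):
    """SVRG-style variance-reduced method for the saddle point.
    Each epoch computes one full gradient at a snapshot; inner steps use the
    low-variance estimator grad_i(w) - grad_i(snapshot) + full_grad(snapshot)."""
    x, y = x0, y0
    for _ in range(epochs):
        sx, sy = x, y                    # take a snapshot of the current point
        fgx, fgy = full_grad(sx, sy)     # one full-gradient pass per epoch
        for _ in range(inner):
            i = random.randrange(n)
            gx, gy = grad_i(i, x, y)
            hx, hy = grad_i(i, sx, sy)
            vx = gx - hx + fgx           # variance-reduced x-component
            vy = gy - hy + fgy           # variance-reduced y-component
            x -= step * vx               # descent step in the primal variable
            y -= step * vy               # ascent step (sign folded into grad_i)
    return x, y

x_star, y_star = svrg_saddle(5.0, -3.0)
```

Because the component gradients at the snapshot cancel the full gradient in expectation, the estimator's variance shrinks as the iterates approach the snapshot, which is what removes the usual O(1/√k) noise floor of plain stochastic gradient steps; the paper's contribution is to combine this mechanism with acceleration to reach nearly optimal complexity.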
Original language: English (US)
Pages (from-to): 100048
Journal: EURO Journal on Computational Optimization
State: Published - Oct 18, 2022

Bibliographical note

KAUST Repository Item: Exported on 2022-10-31
Acknowledgements: This work was supported by a grant for research centers in the field of artificial intelligence, provided by the Analytical Center for the Government of the Russian Federation in accordance with the subsidy agreement (agreement identifier 000000D730321P5Q0002) and the agreement with the Moscow Institute of Physics and Technology dated November 1, 2021 No. 70-2021-00138.

