Quasi-Newton methods for machine learning: forget the past, just sample

A. S. Berahas, M. Jahani, P. Richtárik, M. Takáč

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

We present two sampled quasi-Newton methods (sampled LBFGS and sampled LSR1) for solving empirical risk minimization problems that arise in machine learning. In contrast to the classical variants of these methods, which sequentially build Hessian or inverse Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate at every iteration to produce these approximations. As a result, the constructed approximations use more reliable (recent and local) information and do not depend on past iterate information that could be significantly stale. Our proposed algorithms are efficient in terms of accessed data points (epochs) and have enough concurrency to take advantage of parallel/distributed computing environments. We provide convergence guarantees for our proposed methods. Numerical tests on a toy classification problem as well as on popular benchmarking binary classification and neural network training tasks reveal that the methods outperform their classical variants.
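
The abstract's central idea is that curvature pairs are built from points sampled around the current iterate rather than from past iterates. The sketch below is not the authors' implementation; it is a minimal NumPy illustration that assumes gradient differences at randomly perturbed points as the curvature pairs and the standard L-BFGS two-loop recursion to apply the resulting inverse Hessian approximation. The function names, sampling radius, and memory size m are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the "forget the past, just sample" idea for L-BFGS.
# All names (sample_curvature_pairs, two_loop_recursion, radius, m) are
# illustrative assumptions; this is not the authors' reference code.
import numpy as np

def sample_curvature_pairs(grad, w, m=10, radius=1e-2, rng=None):
    """Build up to m curvature pairs from points sampled around the current iterate w.

    s_i is a random perturbation around w; y_i is the corresponding gradient
    difference, one natural way to capture local curvature information.
    """
    rng = np.random.default_rng() if rng is None else rng
    g_w = grad(w)
    S, Y = [], []
    for _ in range(m):
        sigma = radius * rng.standard_normal(w.shape)
        y = grad(w + sigma) - g_w
        if sigma @ y > 1e-10:            # keep only pairs with positive curvature
            S.append(sigma)
            Y.append(y)
    return S, Y

def two_loop_recursion(g, S, Y):
    """Standard L-BFGS two-loop recursion applied to the freshly sampled pairs."""
    q = g.copy()
    alphas = []
    rhos = [1.0 / (s @ y) for s, y in zip(S, Y)]
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if S:                                 # initial scaling gamma = s'y / y'y
        gamma = (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
        q *= gamma
    for s, y, rho, a in zip(S, Y, rhos, reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q                              # approximately H^{-1} g

# Toy usage: quadratic f(w) = 0.5 w'Aw - b'w, so grad(w) = Aw - b.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20)); A = A @ A.T + 20 * np.eye(20)
    b = rng.standard_normal(20)
    grad = lambda w: A @ w - b
    w = np.zeros(20)
    for _ in range(25):
        S, Y = sample_curvature_pairs(grad, w, m=10, rng=rng)
        w -= 0.5 * two_loop_recursion(grad(w), S, Y)   # fixed damped step for the sketch
    print("||grad|| =", np.linalg.norm(grad(w)))
```

On this toy quadratic, the search direction is built entirely from pairs sampled at the current iterate, so no curvature history is carried across iterations; this is the staleness-avoidance property the abstract emphasizes.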
Original language: English (US)
Pages (from-to): 1-37
Number of pages: 37
Journal: Optimization Methods and Software
State: Published - Oct 15 2021

Bibliographical note

Acknowledgements: This work was partially supported by the U.S. National Science Foundation, under award numbers NSF:CCF:1618717 and NSF:CCF:1740796, Defense Advanced Research Projects Agency (DARPA) Lagrange award PHR-001117S0039, and XSEDE Startup grant IRI180020.

ASJC Scopus subject areas

  • Control and Optimization
  • Software
  • Applied Mathematics
