Optimal, Stochastic, Non-smooth, Non-convex Optimization through Online-to-Non-convex Conversion

Ashok Cutkosky*, Harsh Mehta*, Francesco Orabona*

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a (δ, ε)-stationary point from O(ε^{-4} δ^{-1}) stochastic gradient queries to O(ε^{-3} δ^{-1}), which we also show to be optimal. Our primary technique is a reduction from non-smooth non-convex optimization to online learning, after which our results follow from standard regret bounds in online learning. For deterministic and second-order smooth objectives, applying more advanced optimistic online learning techniques enables a new complexity of O(ε^{-1.5} δ^{-0.5}). Our improved non-smooth analysis also immediately recovers all optimal or best-known results for finding ε-stationary points of smooth or second-order smooth objectives in both stochastic and deterministic settings.
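The reduction the abstract describes can be illustrated with a minimal schematic: an online learner proposes bounded increments Δ_t, the iterate moves by Δ_t, and the learner is charged the linear loss ⟨g_t, Δ_t⟩ built from a stochastic gradient. This sketch is only an illustration of that general online-to-non-convex pattern, not the paper's exact algorithm; the momentum-style learner, step sizes, clipping radius, and noise model below are all assumptions.

```python
import numpy as np

def o2nc_sketch(grad, x0, steps=1000, lr=0.01, beta=0.9, radius=0.1, seed=0):
    """Schematic online-to-non-convex conversion (illustrative only).

    An online learner (here, a simple discounted/momentum-style update,
    projected onto a ball of radius `radius`) plays increments delta_t
    against the linear losses delta -> <g_t, delta>, where g_t is a noisy
    gradient; the iterate is advanced by each increment.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    delta = np.zeros_like(x)
    for _ in range(steps):
        # stochastic first-order oracle (noise model is an assumption)
        g = grad(x) + 0.01 * rng.standard_normal(x.shape)
        # online learner's update for the linear loss <g, delta>
        delta = beta * delta - lr * g
        norm = np.linalg.norm(delta)
        if norm > radius:                 # project back onto the ball
            delta *= radius / norm
        x = x + delta                     # iterate moves by the increment
    return x

# usage: seek a stationary point of the non-convex test function x^2 + sin(x)
f_grad = lambda x: 2 * x + np.cos(x)
x_star = o2nc_sketch(f_grad, np.array([3.0]))
```

The key design point the abstract emphasizes is that the non-convex guarantee is inherited from the online learner's regret bound, so swapping in a better online algorithm (e.g. an optimistic one, in the smooth deterministic case) directly improves the complexity.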

Original language: English (US)
Pages: 6643-6670
Number of pages: 28
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23, 2023 – Jul 29, 2023

Conference

Conference: 40th International Conference on Machine Learning, ICML 2023
Country/Territory: United States
City: Honolulu
Period: 07/23/23 – 07/29/23

Bibliographical note

Publisher Copyright:
© 2023 Proceedings of Machine Learning Research. All rights reserved.

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
