Abstract
Online learning algorithms are fast, memory-efficient, easy to implement, and applicable to many prediction problems, including classification, regression, and ranking. Several online algorithms have been proposed over the past few decades, some based on additive updates, like the Perceptron, and some on multiplicative updates, like Winnow. A unifying perspective on the design and analysis of online algorithms is provided by online mirror descent, a general prediction strategy from which most first-order algorithms can be obtained as special cases. We generalize online mirror descent to time-varying regularizers with generic updates. Unlike standard mirror descent, our more general formulation also captures second-order algorithms, algorithms for composite losses, and algorithms for adaptive filtering. Moreover, we recover, and sometimes improve, known regret bounds as special cases of our analysis using specific regularizers. Finally, we show the power of our approach by deriving a new second-order algorithm whose regret bound is invariant with respect to arbitrary rescalings of individual features.
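To make the unifying view concrete, the sketch below instantiates the generic mirror descent template with two classical regularizers: the squared Euclidean norm, which yields additive (Perceptron/gradient-descent-style) updates, and the entropic regularizer, which yields multiplicative (Winnow/EG-style) updates. This is a minimal toy sketch, not the paper's generalized algorithm; the function names `omd_hinge` and `omd_eg`, the hinge loss, and the fixed learning rate `eta` are illustrative assumptions.

```python
import numpy as np

def omd_hinge(X, y, eta=0.1):
    """Online mirror descent with the squared Euclidean regularizer
    (this special case is online subgradient descent), run on hinge loss.

    X: (T, d) array of feature vectors; y: (T,) array of +/-1 labels.
    Returns the number of prediction mistakes."""
    T, d = X.shape
    theta = np.zeros(d)              # dual vector: accumulated negative subgradients
    mistakes = 0
    for t in range(T):
        w = eta * theta              # link function: gradient of the dual regularizer
        margin = y[t] * (w @ X[t])
        if margin <= 0:
            mistakes += 1
        if margin < 1:               # hinge subgradient is nonzero: additive update
            theta += y[t] * X[t]
    return mistakes

def omd_eg(X, y, eta=0.1):
    """Same dual update with an entropic regularizer: the link function is a
    normalized exponential, so the primal update is multiplicative and the
    weights stay on the probability simplex."""
    T, d = X.shape
    theta = np.zeros(d)
    mistakes = 0
    for t in range(T):
        w = np.exp(eta * theta)
        w /= w.sum()                 # link for the entropic regularizer
        margin = y[t] * (w @ X[t])
        if margin <= 0:
            mistakes += 1
        if margin < 1:
            theta += y[t] * X[t]
    return mistakes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = np.sign(X @ rng.normal(size=5))
    print(omd_hinge(X, y), omd_eg(X, y))
```

Note how the two algorithms share the same dual update and differ only in the link function induced by the regularizer; the paper's generalization additionally lets the regularizer change over time, which is what captures second-order and adaptive-filtering algorithms.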
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 411-435 |
| Number of pages | 25 |
| Journal | Machine Learning |
| Volume | 99 |
| Issue number | 3 |
| State | Published - Jun 22 2015 |
| Externally published | Yes |
Bibliographical note
Generated from Scopus record by KAUST IRTS on 2023-09-25

ASJC Scopus subject areas
- Artificial Intelligence
- Software