Simultaneous model selection and optimization through parameter-free stochastic learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

62 Scopus citations

Abstract

Stochastic gradient descent algorithms for training linear and kernel predictors are gaining importance thanks to their scalability. While various methods have been proposed to speed up their convergence, the model selection phase is often ignored: theoretical works typically assume prior knowledge of, for example, the norm of the optimal solution, while in practice validation methods remain the only viable approach. In this paper, we propose a new kernel-based stochastic gradient descent algorithm that performs model selection while training, with no parameters to tune and no form of cross-validation. The algorithm builds on recent advances in online learning theory for unconstrained settings to estimate over time the right amount of regularization in a data-dependent way. Optimal rates of convergence are proved under standard smoothness assumptions on the target function, and preliminary empirical results are reported.
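
To make the idea in the abstract concrete, the snippet below is a minimal, hypothetical sketch of a parameter-free stochastic learner: the scale of the predictor is estimated from the data through a coin-betting-style update instead of being fixed by a tuned learning rate or regularizer. It is an illustration of the general technique, not the algorithm from the paper; the linear predictor, hinge loss, unit-norm features, and the initial wealth `epsilon` are assumptions made only for this example.

```python
# Illustrative parameter-free stochastic learner (coin-betting-style update).
# NOT the paper's algorithm; a generic sketch of the "no learning rate to tune" idea.
# Assumes a linear predictor, hinge loss, labels in {-1, +1}, and feature vectors
# normalized so that subgradients have norm <= 1.

import numpy as np

def parameter_free_sgd(X, y, epsilon=1.0):
    """One pass over (X, y); no step size or regularization parameter to tune."""
    n, d = X.shape
    theta = np.zeros(d)      # negative sum of past subgradients
    wealth = epsilon         # initial "wealth"; only sets the scale of the estimate
    w = np.zeros(d)
    for t in range(1, n + 1):
        w = theta / t * wealth           # predictor = betting fraction * current wealth
        x, label = X[t - 1], y[t - 1]
        margin = label * (w @ x)
        g = -label * x if margin < 1.0 else np.zeros(d)   # hinge-loss subgradient
        theta -= g                       # accumulate negative gradients
        wealth -= g @ w                  # wealth grows when the "bet" w agrees with -g
    return w

# Usage on a toy separable problem (rows normalized to unit norm):
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]))
w = parameter_free_sgd(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Note that the effective norm of the learned predictor grows or shrinks with the accumulated wealth, which plays the role that a hand-tuned regularization parameter would otherwise play.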
Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Pages: 1116-1124
Number of pages: 9
State: Published - Jan 1 2014
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-25

