Multi Kernel Learning with online-batch optimization

Francesco Orabona, Luo Jie, Barbara Caputo

Research output: Contribution to journal › Article › peer-review

44 Scopus citations

Abstract

In recent years there has been considerable interest in designing principled classification algorithms over multiple cues, based on the intuitive notion that using more features should lead to better performance. In the domain of kernel methods, a principled way to use multiple features is the Multi Kernel Learning (MKL) approach. Here we present an MKL optimization algorithm based on stochastic gradient descent that has a guaranteed convergence rate. We solve the MKL problem directly in the primal formulation. Through a p-norm formulation of MKL, we introduce a parameter that controls the level of sparsity of the solution while leading to an easier optimization problem. We prove theoretically and verify experimentally that: 1) our algorithm converges faster as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations suffice to reach good solutions. Experiments on standard benchmark databases support our claims. © 2012 Francesco Orabona, Luo Jie and Barbara Caputo.
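To make the abstract's recipe concrete, the following is a minimal sketch of p-norm MKL trained by stochastic subgradient descent in the primal, with a hinge loss and a squared (2,p) group-norm regularizer. It rests on several assumptions not in the source: each kernel is represented by an explicit feature map rather than a kernel matrix, the function name pnorm_mkl_sgd and all parameter choices are hypothetical, and the paper's actual online-batch procedure (including its batch refinement stage) is not reproduced here.

import numpy as np

def pnorm_mkl_sgd(feature_maps, y, p=1.5, lam=0.01, epochs=5, seed=0):
    """Sketch of p-norm MKL via stochastic subgradient descent (hinge loss).

    feature_maps: list of (n_samples, d_j) arrays, one explicit feature map
                  per kernel (an illustrative assumption; the paper works
                  with kernels directly).
    y:            labels in {-1, +1}, shape (n_samples,).
    p:            group-norm exponent in (1, 2]; values near 1 promote
                  sparsity over kernels, p = 2 spreads weight across them.
    lam:          regularization strength for lam/2 * ||w||_{2,p}^2.
    """
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    ws = [np.zeros(F.shape[1]) for F in feature_maps]  # one weight block per kernel
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # standard 1/(lam*t) SGD step size
            margin = y[i] * sum(F[i] @ w for F, w in zip(feature_maps, ws))
            # Gradient of lam/2 * ||w||_{2,p}^2 w.r.t. block j is
            # lam * ||w||_{2,p}^(2-p) * ||w_j||^(p-2) * w_j (zero at w_j = 0).
            norms = np.array([np.linalg.norm(w) for w in ws])
            r = (norms ** p).sum() ** (1.0 / p) if norms.any() else 0.0
            for j, (F, w) in enumerate(zip(feature_maps, ws)):
                if norms[j] > 0 and r > 0:
                    g = lam * r ** (2 - p) * norms[j] ** (p - 2) * w
                else:
                    g = np.zeros_like(w)
                if margin < 1:  # hinge loss is active for this sample
                    g = g - y[i] * F[i]
                ws[j] = w - eta * g
    return ws

The exponent p plays the role described in the abstract: as p approaches 1 the (2,p) norm approximates a group lasso and drives whole kernel blocks to zero, while p = 2 reduces to standard 2-norm regularization over the concatenated features, trading sparsity for a smoother, easier optimization problem.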
Original language: English (US)
Pages (from-to): 227-253
Number of pages: 27
Journal: Journal of Machine Learning Research
Volume: 13
State: Published - Feb 1 2012
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-25

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Statistics and Probability
  • Control and Systems Engineering
