Abstract
In recent years there has been a lot of interest in designing principled classification algorithms over multiple cues, based on the intuitive notion that using more features should lead to better performance. In the domain of kernel methods, a principled way to use multiple features is the Multi Kernel Learning (MKL) approach. Here we present an MKL optimization algorithm based on stochastic gradient descent that has a guaranteed convergence rate. We directly solve the MKL problem in the primal formulation. By using a p-norm formulation of MKL, we introduce a parameter that controls the level of sparsity of the solution, while leading to an easier optimization problem. We prove theoretically and experimentally that 1) our algorithm has a faster convergence rate as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are sufficient to reach good solutions. Experiments on standard benchmark databases support our claims. © 2012 Francesco Orabona, Luo Jie and Barbara Caputo.
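As a rough illustration of the primal p-norm MKL setup described in the abstract, the sketch below runs plain stochastic subgradient descent on a hinge loss plus a squared (2,p) group-norm regularizer over per-kernel feature blocks. The function name `pnorm_mkl_sgd`, the explicit feature-map representation of each kernel, the fixed learning rate, and all hyperparameter values are illustrative assumptions; this is not the paper's exact update rule or step-size schedule.

```python
import numpy as np

def pnorm_mkl_sgd(feature_blocks, y, p=1.5, lam=0.01, epochs=10, lr=0.1, seed=0):
    """Stochastic subgradient descent on a primal p-norm MKL-style objective.

    Each element of `feature_blocks` is an (n, d_j) array: an explicit feature
    map standing in for one kernel. The per-example objective is

        hinge(y_i * sum_j <w_j, x_ij>) + (lam / 2) * (sum_j ||w_j||_2^p)^(2/p),

    where the group-norm exponent p controls sparsity over kernels
    (p close to 1 gives sparser kernel combinations).
    """
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    ws = [np.zeros(X.shape[1]) for X in feature_blocks]

    for _ in range(epochs):
        for i in rng.permutation(n):
            # Prediction is the sum of the per-kernel (per-block) scores.
            score = sum(X[i] @ w for X, w in zip(feature_blocks, ws))
            # Subgradient of the hinge loss w.r.t. the score.
            g_loss = -y[i] if y[i] * score < 1.0 else 0.0
            # Subgradient of (1/2) * (sum_j ||w_j||^p)^(2/p) w.r.t. w_j is
            # S^(2/p - 1) * ||w_j||^(p-2) * w_j, with S = sum_j ||w_j||^p.
            norms = np.array([np.linalg.norm(w) for w in ws])
            S = float((norms ** p).sum())
            for j, (X, w) in enumerate(zip(feature_blocks, ws)):
                if S > 0.0 and norms[j] > 0.0:
                    g_reg = S ** (2.0 / p - 1.0) * norms[j] ** (p - 2.0) * w
                else:
                    g_reg = np.zeros_like(w)
                ws[j] = w - lr * (g_loss * X[i] + lam * g_reg)
    return ws

# Tiny synthetic usage example: two "kernels" (feature blocks), labels in {-1, +1}.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X1, X2 = rng.standard_normal((100, 5)), rng.standard_normal((100, 3))
    y = np.sign(X1[:, 0] + 0.5 * X2[:, 1] + 0.1 * rng.standard_normal(100))
    ws = pnorm_mkl_sgd([X1, X2], y, p=1.25)
    print([np.linalg.norm(w) for w in ws])  # per-kernel weight norms
```

With p > 1 the group-norm regularizer is differentiable away from the origin, which is what makes a simple gradient-based primal solver like this workable; the paper's actual algorithm and its convergence guarantees are considerably more refined than this sketch.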
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 227-253 |
| Number of pages | 27 |
| Journal | Journal of Machine Learning Research |
| Volume | 13 |
| State | Published - Feb 1 2012 |
| Externally published | Yes |
Bibliographical note
Generated from Scopus record by KAUST IRTS on 2023-09-25

ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Statistics and Probability
- Control and Systems Engineering