TY - GEN
T1 - Online-batch strongly convex multi kernel learning
AU - Orabona, Francesco
AU - Jie, Luo
AU - Caputo, Barbara
N1 - Generated from Scopus record by KAUST IRTS on 2023-09-25
PY - 2010/8/31
Y1 - 2010/8/31
N2 - Several object categorization algorithms use kernel methods over multiple cues, as they offer a principled approach to combining multiple cues and obtaining state-of-the-art performance. A general drawback of these strategies is the high computational cost during training, which prevents their application to large-scale problems. They also do not provide theoretical guarantees on their convergence rate. Here we present a Multiclass Multi Kernel Learning (MKL) algorithm that obtains state-of-the-art performance in a considerably lower training time. We generalize the standard MKL formulation to introduce a parameter that allows us to decide the level of sparsity of the solution. Thanks to this new setting, we can directly solve the problem in the primal formulation. We prove theoretically and experimentally that 1) our algorithm has a faster convergence rate as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are enough to reach good solutions. Experiments on three standard benchmark databases support our claims. ©2010 IEEE.
AB - Several object categorization algorithms use kernel methods over multiple cues, as they offer a principled approach to combining multiple cues and obtaining state-of-the-art performance. A general drawback of these strategies is the high computational cost during training, which prevents their application to large-scale problems. They also do not provide theoretical guarantees on their convergence rate. Here we present a Multiclass Multi Kernel Learning (MKL) algorithm that obtains state-of-the-art performance in a considerably lower training time. We generalize the standard MKL formulation to introduce a parameter that allows us to decide the level of sparsity of the solution. Thanks to this new setting, we can directly solve the problem in the primal formulation. We prove theoretically and experimentally that 1) our algorithm has a faster convergence rate as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are enough to reach good solutions. Experiments on three standard benchmark databases support our claims. ©2010 IEEE.
UR - http://ieeexplore.ieee.org/document/5540137/
UR - http://www.scopus.com/inward/record.url?scp=77955993905&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2010.5540137
DO - 10.1109/CVPR.2010.5540137
M3 - Conference contribution
SN - 9781424469840
SP - 787
EP - 794
BT - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ER -