TY - CONF
T1 - From n to n+1: Multiclass transfer incremental learning
AU - Kuzborskij, Ilja
AU - Orabona, Francesco
AU - Caputo, Barbara
N1 - Generated from Scopus record by KAUST IRTS on 2023-09-25
PY - 2013/11/15
Y1 - 2013/11/15
N2 - Since the seminal work of Thrun [16], the learning to learn paradigm has been defined as the ability of an agent to improve its performance at each task with experience and with the number of tasks. Within the object categorization domain, the visual learning community has actively taken up this paradigm in the transfer learning setting. Almost all proposed methods focus on category detection problems, addressing how to learn a new target class from few samples by leveraging known source classes. When learning over multiple tasks, however, there is a need for multiclass transfer learning algorithms that exploit previous source knowledge when learning a new class while at the same time optimizing their overall performance. This is an open challenge for existing transfer learning algorithms. The contribution of this paper is a discriminative method that addresses this issue, based on a Least-Squares Support Vector Machine formulation. Our approach is designed to balance transferring to the new class against preserving what has already been learned on the source models. Extensive experiments on subsets of publicly available datasets demonstrate the effectiveness of our approach. © 2013 IEEE.
AB - Since the seminal work of Thrun [16], the learning to learn paradigm has been defined as the ability of an agent to improve its performance at each task with experience and with the number of tasks. Within the object categorization domain, the visual learning community has actively taken up this paradigm in the transfer learning setting. Almost all proposed methods focus on category detection problems, addressing how to learn a new target class from few samples by leveraging known source classes. When learning over multiple tasks, however, there is a need for multiclass transfer learning algorithms that exploit previous source knowledge when learning a new class while at the same time optimizing their overall performance. This is an open challenge for existing transfer learning algorithms. The contribution of this paper is a discriminative method that addresses this issue, based on a Least-Squares Support Vector Machine formulation. Our approach is designed to balance transferring to the new class against preserving what has already been learned on the source models. Extensive experiments on subsets of publicly available datasets demonstrate the effectiveness of our approach. © 2013 IEEE.
UR - http://ieeexplore.ieee.org/document/6619275/
UR - http://www.scopus.com/inward/record.url?scp=84887384986&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2013.431
DO - 10.1109/CVPR.2013.431
M3 - Conference contribution
SP - 3358
EP - 3365
BT - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ER -