Abstract
The combination of sparse coding and transfer learning techniques has been shown to be accurate and robust in classification tasks where the training and testing objects share a feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such cases is that, despite the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all objects in the target domain are unlabeled, so the training set comprised objects from the source domain only. In real-world applications, however, the target domain often contains some labeled objects, or a small number of them can be labeled manually. In this paper, we explore this possibility and show how a small number of labeled objects in the target domain can significantly improve the classification accuracy of state-of-the-art transfer sparse coding methods. We further propose a unified framework, supervised transfer sparse coding (STSC), which simultaneously optimizes the sparse representation, domain transfer, and classification. Experimental results on three applications demonstrate that a small amount of manual labeling, followed by learning the model in a supervised fashion, can significantly improve classification accuracy.
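The abstract does not state the objective itself, but a joint formulation of the kind STSC describes typically couples a sparse-coding term, a cross-domain distribution-matching term, and a supervised loss over the labeled samples. The sketch below is an illustrative assumption, not the paper's exact objective; the trade-off weights λ, α, β and the MMD-style transfer term are hypothetical stand-ins.

```latex
% Illustrative joint objective (assumed form, not the paper's formulation):
% D - shared dictionary, S - sparse codes of all samples,
% W - linear classifier, \mathcal{L} - index set of labeled samples
%     (source objects plus the few labeled target objects).
\min_{D,\, S,\, W}\;
\underbrace{\lVert X - DS \rVert_F^2 + \lambda \lVert S \rVert_1}_{\text{sparse representation}}
\;+\; \alpha\, \underbrace{\operatorname{MMD}\!\left(S_{\mathrm{src}},\, S_{\mathrm{tgt}}\right)}_{\text{domain transfer}}
\;+\; \beta\, \underbrace{\sum_{i \in \mathcal{L}} \ell\!\left(y_i,\, W^{\top} s_i\right)}_{\text{classification}}
```

Optimizing the three terms jointly, rather than learning codes first and a classifier afterwards, is what lets the few labeled target samples shape both the dictionary and the alignment between domains.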
| Original language | English (US) |
| --- | --- |
| Title of host publication | Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence |
| Publisher | The AAAI Press |
| State | Published - Jan 1 2014 |
Bibliographical note
Acknowledgements: The Association for the Advancement of Artificial Intelligence