Stability and hypothesis transfer learning

Ilja Kuzborskij, Francesco Orabona

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

62 Scopus citations

Abstract

We consider the transfer learning scenario where the learner does not have access to the source domain directly, but instead operates on hypotheses induced from it: the Hypothesis Transfer Learning (HTL) problem. In particular, we conduct a theoretical analysis of HTL by considering the algorithmic stability of a class of HTL algorithms based on Regularized Least Squares with biased regularization. We show that the relatedness of the source and target domains accelerates the convergence of the Leave-One-Out error to the generalization error, thus enabling the use of the Leave-One-Out error to find the optimal transfer parameters even in the presence of a small training set. In the case of unrelated domains, we also suggest a theoretically principled way to prevent negative transfer, so that in the limit we recover the performance of an algorithm that does not use any knowledge from the source domain.
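
The algorithm class named in the abstract, Regularized Least Squares with biased regularization, admits a closed-form solution and a closed-form Leave-One-Out error, which is what makes the LOO-based selection of transfer parameters cheap. The following is a minimal NumPy sketch, not the paper's code: the names X, y, w_src, and lam are illustrative, and the LOO formula is the standard hat-matrix identity for ridge regression adapted to the biased penalty.

```python
import numpy as np

def biased_rls_fit(X, y, w_src, lam):
    """Regularized Least Squares with biased regularization:
    argmin_w ||X w - y||^2 + lam * ||w - w_src||^2.
    The penalty shrinks w toward the source hypothesis w_src
    rather than toward zero; w_src = 0 recovers plain RLS
    (no knowledge transferred from the source domain)."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * w_src)

def loo_residuals(X, y, w_src, lam):
    """Closed-form leave-one-out residuals. Substituting
    v = w - w_src reduces the problem to ordinary ridge on the
    shifted targets y - X @ w_src, so the standard identity
    (y_i - yhat_i) / (1 - H_ii) applies unchanged."""
    w = biased_rls_fit(X, y, w_src, lam)
    A = X.T @ X + lam * np.eye(X.shape[1])
    H = X @ np.linalg.solve(A, X.T)  # hat matrix of the ridge fit
    return (y - X @ w) / (1.0 - np.diag(H))
```

With these residuals, the Leave-One-Out error is simply np.mean(loo_residuals(X, y, w_src, lam) ** 2), so scanning a grid of transfer parameters lam (or of candidate source hypotheses w_src) costs one linear solve per candidate; this is the small-sample model-selection use that the abstract's stability analysis justifies.
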
Original language: English (US)
Title of host publication: 30th International Conference on Machine Learning, ICML 2013
Publisher: International Machine Learning Society (IMLS)
Pages: 1979-1987
Number of pages: 9
State: Published - Jan 1 2013
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-25
