TY - GEN
T1 - Algorithmic stability and uniform generalization
AU - Alabdulmohsin, Ibrahim
N1 - KAUST Repository Item: Exported on 2020-12-24
PY - 2015/1/1
Y1 - 2015/1/1
AB - One of the central questions in statistical learning theory is to determine the conditions under which agents can learn from experience. This includes the necessary and sufficient conditions for generalization from a given finite training set to new observations. In this paper, we prove that algorithmic stability in the inference process is equivalent to uniform generalization across all parametric loss functions. We provide various interpretations of this result. For instance, a relationship is proved between stability and data processing, which reveals that algorithmic stability can be improved by post-processing the inferred hypothesis or by augmenting training examples with artificial noise prior to learning. In addition, we establish a relationship between algorithmic stability and the size of the observation space, which provides a formal justification for dimensionality reduction methods. Finally, we connect algorithmic stability to the size of the hypothesis space, which recovers the classical PAC result that the size (complexity) of the hypothesis space should be controlled in order to improve algorithmic stability and improve generalization.
UR - http://hdl.handle.net/10754/666619
UR - https://www.semanticscholar.org/paper/Algorithmic-Stability-and-Uniform-Generalization-Alabdulmohsin/43120f31b5608402540d17dc79bcdaaa0260674d
UR - http://www.scopus.com/inward/record.url?scp=84965169021&partnerID=8YFLogxK
M3 - Conference contribution
SP - 19
EP - 27
BT - 29th Annual Conference on Neural Information Processing Systems, NIPS 2015
PB - Neural Information Processing Systems Foundation
ER -