Algorithmic stability and uniform generalization

Ibrahim Alabdulmohsin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Scopus citations


One of the central questions in statistical learning theory is to determine the conditions under which agents can learn from experience. This includes the necessary and sufficient conditions for generalization from a given finite training set to new observations. In this paper, we prove that algorithmic stability in the inference process is equivalent to uniform generalization across all parametric loss functions. We provide various interpretations of this result. For instance, we prove a relationship between stability and data processing, which reveals that algorithmic stability can be improved by post-processing the inferred hypothesis or by augmenting training examples with artificial noise prior to learning. In addition, we establish a relationship between algorithmic stability and the size of the observation space, which provides a formal justification for dimensionality reduction methods. Finally, we connect algorithmic stability to the size of the hypothesis space, which recovers the classical PAC result that the size (complexity) of the hypothesis space should be controlled in order to improve algorithmic stability and generalization.
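To make the notion of algorithmic stability concrete, here is a minimal sketch (not taken from the paper) using the empirical-mean learner on examples in [0, 1]: replacing any single one of the n training points changes the inferred hypothesis by at most 1/n, so the learner's sensitivity to individual examples vanishes as the sample grows. The function names are illustrative, not from the paper.

```python
# Illustrative sketch of uniform algorithmic stability (hypothetical
# names, not from the paper): the empirical-mean learner on [0, 1] is
# beta-stable with beta = 1/n, since swapping one of n examples moves
# the output mean by at most 1/n.

def mean_learner(sample):
    """Hypothesis = empirical mean of the training sample."""
    return sum(sample) / len(sample)

def stability_gap(sample, replacement):
    """Largest change in the hypothesis when one example is replaced."""
    base = mean_learner(sample)
    worst = 0.0
    for i in range(len(sample)):
        perturbed = sample[:i] + [replacement] + sample[i + 1:]
        worst = max(worst, abs(mean_learner(perturbed) - base))
    return worst

n = 100
sample = [0.0] * n                  # all examples at one end of [0, 1]
gap = stability_gap(sample, 1.0)    # swap in the opposite extreme
print(gap)                          # worst-case change, bounded by 1/n
assert gap <= 1.0 / n + 1e-12
```

Stability bounds of this form are what the paper's equivalence result connects to uniform generalization: a learner whose output is insensitive to any single training example cannot overfit that example.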
Original language: English (US)
Title of host publication: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015
Publisher: Neural Information Processing Systems Foundation
Number of pages: 9
State: Published - Jan 1 2015

Bibliographical note

KAUST Repository Item: Exported on 2020-12-24

