Abstract
Overparameterized learning architectures fail to generalize well in the presence of data imbalance, even when combined with traditional techniques for mitigating imbalance. This paper focuses on imbalanced classification datasets in which a small subset of the population, a minority, may contain features that correlate spuriously with the class label. For a parametric family of cross-entropy loss modifications and a representative Gaussian mixture model, we derive non-asymptotic generalization bounds on the worst-group error that shed light on the role of different hyperparameters. Specifically, we prove that, when appropriately tuned, the recently proposed VS-loss learns a model that is fair towards minorities even when spurious features are strong. On the other hand, alternative heuristics, such as weighted CE and the LA-loss, can fail dramatically. Compared to previous works, our bounds hold for more general models, are non-asymptotic, and apply even in scenarios of extreme imbalance.
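The abstract treats the VS-loss, the LA-loss, and weighted CE as members of a single parametric family of cross-entropy modifications. As a minimal sketch, assuming the vector-scaling parameterization with per-class multiplicative factors `delta` and additive offsets `iota` (parameter names chosen here for illustration, following the VS-loss of Kini et al., 2021), the family can be written as:

```python
import numpy as np

def vs_loss(logits, y, delta, iota):
    """Vector-scaling (VS) loss: cross-entropy computed on logits that are
    multiplicatively scaled by `delta` and additively shifted by `iota`,
    both set per class (typically as functions of the class frequencies).

    logits: (n, C) array of model outputs
    y:      (n,)   array of integer class labels
    delta:  (C,)   per-class multiplicative adjustments
    iota:   (C,)   per-class additive adjustments
    """
    z = delta * logits + iota             # per-class adjusted logits
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

# Special cases of the same family (as hedged assumptions, based on
# the loss modifications named in the abstract):
#   delta = 1, iota = 0                 -> standard cross-entropy
#   delta = 1, iota = tau * log(prior)  -> LA-loss (additive logit adjustment)
#   delta per class, iota = 0           -> multiplicative adjustment only
# (Weighted CE instead reweights each sample's loss term by a class weight.)
```

The paper's analysis concerns how such hyperparameters (the multiplicative versus additive adjustments) govern worst-group error; the sketch above only illustrates the shape of the loss family, not the tuning the paper derives.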
| Field | Value |
|---|---|
| Original language | English (US) |
| Title of host publication | 2022 IEEE International Symposium on Information Theory (ISIT) |
| Publisher | IEEE |
| Pages | 121-126 |
| Number of pages | 6 |
| ISBN (Print) | 9781665421591 |
| DOIs | |
| State | Published - Jun 26 2022 |
| Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2022-10-07
Acknowledged KAUST grant number(s): CRG8
Acknowledgements: This work is supported by an NSERC Discovery Grant, by NSF Grant CCF-2009030, and by a CRG8 award from KAUST.