Abstract
The growing literature on “benign overfitting” in overparameterized models has been mostly restricted to regression or binary classification settings; however, most success stories of modern machine learning have been recorded in multiclass settings. Motivated by this discrepancy, we study benign overfitting in multiclass linear classification. Specifically, we consider the following popular training algorithms on separable data: (i) empirical risk minimization (ERM) with cross-entropy loss, which converges to the multiclass support vector machine (SVM) solution; (ii) ERM with least-squares loss, which converges to the min-norm interpolating (MNI) solution; and (iii) the one-vs-all SVM classifier. Our first key finding is that under a simple sufficient condition, all three algorithms lead to classifiers that interpolate the training data and have equal accuracy. When the data are generated from Gaussian mixtures or a multinomial logistic model, this condition holds under high enough effective overparameterization. Second, we derive novel error bounds on the accuracy of the MNI classifier, thereby showing that all three training algorithms lead to benign overfitting under sufficient overparameterization. Ultimately, our analysis shows that good generalization is possible for SVM solutions beyond the realm in which typical margin-based bounds apply.
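The MNI solution referenced in the abstract can be illustrated concretely. The sketch below (not from the paper; the dimensions, the Gaussian-mixture parameters, and the variable names are assumptions for illustration) computes the min-norm interpolating classifier for one-hot least-squares targets via the pseudoinverse, in an overparameterized regime where the features outnumber the samples, and checks that it interpolates the training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy overparameterized setup (d >> n), assumed for illustration:
# a 3-class Gaussian mixture with well-separated class means.
n, d, k = 30, 300, 3
means = rng.normal(size=(k, d)) * 3.0
y = rng.integers(0, k, size=n)          # class labels in {0, 1, 2}
X = means[y] + rng.normal(size=(n, d))  # n x d feature matrix

# One-hot encode the labels: Y is n x k.
Y = np.eye(k)[y]

# Min-norm interpolating (MNI) solution of the least-squares problem:
# W = argmin ||W||_F subject to X W = Y, given by the pseudoinverse X^+ Y.
W = np.linalg.pinv(X) @ Y

# With d > n and X of full row rank, W interpolates the one-hot targets.
assert np.allclose(X @ W, Y, atol=1e-8)

# Multiclass prediction: argmax over the k per-class scores.
preds = np.argmax(X @ W, axis=1)
train_acc = np.mean(preds == y)
```

Because the solution interpolates the one-hot targets exactly, the training accuracy is 100% by construction; the paper's question is when this overfitting remains benign, i.e., when the same classifier also generalizes well.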
| Original language | English (US) |
|---|---|
| Title of host publication | 35th Conference on Neural Information Processing Systems, NeurIPS 2021 |
| Publisher | Neural Information Processing Systems Foundation |
| Pages | 24164-24179 |
| Number of pages | 16 |
| ISBN (Print) | 9781713845393 |
| State | Published - Jan 1 2021 |
| Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2022-06-27. Acknowledged KAUST grant number(s): CRG8.
Acknowledgements: This work is partially supported by the NSF under Grant Number CCF-2009030 and by a CRG8 award from KAUST. C. Thrampoulidis would also like to recognize his affiliation with the University of California, Santa Barbara. The authors would like to thank the anonymous reviewers for helpful discussion and suggestions.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.