Adversarial training using empirical risk minimization is the state-of-the-art method for defending against adversarial attacks, i.e., small additive perturbations applied to test data that lead to misclassification. Despite its success in practice, the generalization properties of adversarial training in classification remain largely open. In this paper, we take a first step in this direction by precisely characterizing the robustness of adversarial training in binary linear classification. Specifically, we consider the high-dimensional regime where the model dimension grows with the size of the training set at a constant ratio. Our results give exact asymptotics for both the standard and adversarial test errors under ℓ∞-norm bounded perturbations in a generative Gaussian-mixture model. We use these sharp error formulae to explain how the adversarial and standard errors depend on the overparameterization ratio, the data model, and the attack budget. Finally, by comparing with the robust Bayes estimator, our sharp asymptotics allow us to study the fundamental limits of adversarial training.
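To make the setting concrete, the following is a minimal sketch (not the paper's derivation) of adversarial training for a binary linear classifier on Gaussian-mixture data. It uses the standard fact that, for a linear model, the worst-case ℓ∞ perturbation of budget ε reduces the margin by ε‖w‖₁, so the adversarial logistic loss has a closed form. All numeric values (dimension, sample size, budget) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generative Gaussian-mixture model: y ~ Unif{±1}, x = y*mu + z, z ~ N(0, I).
# n, d, eps are illustrative choices, not values from the paper.
n, d, eps = 200, 100, 0.05
mu = np.ones(d) / np.sqrt(d)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.standard_normal((n, d))

def adv_loss_grad(w):
    """Adversarial logistic loss and its (sub)gradient.

    For a linear classifier the worst-case l_inf attack of budget eps
    shrinks the margin y<w,x> by eps*||w||_1, giving the loss
    mean_i log(1 + exp(-(y_i <w, x_i> - eps*||w||_1))).
    """
    margin = y * (X @ w) - eps * np.abs(w).sum()
    loss = np.logaddexp(0.0, -margin).mean()        # numerically stable log(1+e^{-m})
    s = 0.5 * (1.0 - np.tanh(margin / 2.0))         # sigmoid(-margin), overflow-safe
    grad = -(s * y) @ X / n + eps * s.mean() * np.sign(w)
    return loss, grad

# Plain gradient descent on the adversarial loss.
w = 0.01 * rng.standard_normal(d)
for _ in range(500):
    _, g = adv_loss_grad(w)
    w -= 0.1 * g

# Standard and adversarial test errors on fresh data.  A point is
# adversarially misclassified iff y<w,x> - eps*||w||_1 <= 0.
y_t = rng.choice([-1.0, 1.0], size=2000)
X_t = y_t[:, None] * mu + rng.standard_normal((2000, d))
std_err = np.mean(np.sign(X_t @ w) != y_t)
adv_err = np.mean(y_t * (X_t @ w) - eps * np.abs(w).sum() <= 0)
```

Since the adversarial error counts every standard error plus the points whose margin an ℓ∞ attack can flip, `adv_err >= std_err` always holds; the gap grows with the attack budget ε, which is one of the trade-offs the paper's exact asymptotics quantify.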
|Original language|English (US)|
|Title of host publication|2022 IEEE International Symposium on Information Theory (ISIT)|
|Number of pages|6|
|State|Published - Aug 3 2022|
Bibliographical note: KAUST Repository Item: Exported on 2022-09-14
Acknowledged KAUST grant number(s): GR8
Acknowledgements: The authors acknowledge support by NSF grants 1909320, 2003035, 193464, 2009030 and a GR8 award from KAUST.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.