On Norm-Agnostic Robustness of Adversarial Training

Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin

Research output: Contribution to journal › Article › peer-review



Adversarial examples are carefully perturbed inputs designed to fool machine learning models. A well-acknowledged defense against such examples is adversarial training, in which adversarial examples are injected into the training data to increase robustness. In this paper, we propose a new attack that unveils an undesirable property of state-of-the-art adversarial training: it fails to achieve robustness against $\ell_2$- and $\ell_\infty$-norm perturbations simultaneously. We also discuss a possible solution to this issue and its limitations.
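The norm-specific nature of adversarial perturbations can be illustrated with a minimal sketch. The code below is not the paper's attack (which targets deep networks); it is an illustrative single-step, FGSM-style example on a toy logistic-regression model, showing that $\ell_\infty$- and $\ell_2$-bounded perturbations of equal budget point in different directions. All names and values are hypothetical.

```python
import numpy as np

def grad_logistic_loss(w, x, y):
    """Gradient of the logistic loss w.r.t. the input x (label y in {-1, +1})."""
    margin = y * np.dot(w, x)
    return -y * w / (1.0 + np.exp(margin))

def perturb_linf(g, eps):
    """l_inf-bounded step: move every coordinate by eps along the gradient's sign."""
    return eps * np.sign(g)

def perturb_l2(g, eps):
    """l_2-bounded step: move eps along the normalized gradient direction."""
    norm = np.linalg.norm(g)
    return eps * g / norm if norm > 0 else np.zeros_like(g)

# Toy model and input (illustrative values only)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
y = 1.0

g = grad_logistic_loss(w, x, y)
delta_inf = perturb_linf(g, eps=0.1)
delta_2 = perturb_l2(g, eps=0.1)

# Both perturbations exhaust the same budget, each in its own norm,
# yet they differ in direction -- a hint at why robustness to one
# threat model need not transfer to the other.
print(np.max(np.abs(delta_inf)))  # l_inf norm of the l_inf step: 0.1
print(np.linalg.norm(delta_2))    # l_2 norm of the l_2 step: 0.1
```

Because `np.sign` pushes every coordinate to its budget limit while the normalized-gradient step concentrates the budget along the steepest direction, a model trained against one perturbation set need not cover the other, which is the gap the paper's attack exploits.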
Original language: Undefined/Unknown
Journal: arXiv preprint
State: Published - May 15 2019
Externally published: Yes

Bibliographical note

4 pages, 2 figures, presented at the ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning. arXiv admin note: text overlap with arXiv:1809.03113


  • cs.LG
  • cs.CR
  • stat.ML
