Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks

Kamilya Smagulova, Lina Bacha, Mohammed E. Fouda*, Rouwaida Kanj, Ahmed Eltawil

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Recent works have demonstrated that imperceptible perturbations to input data, known as adversarial examples, can mislead neural networks into producing incorrect outputs. Moreover, the same adversarial sample can be transferable and used to fool different neural models. Such vulnerabilities impede the use of neural networks in mission-critical tasks. To the best of our knowledge, this is the first paper to evaluate the robustness of emerging CNN- and transformer-inspired image classifier models, such as SpinalNet and the Compact Convolutional Transformer (CCT), against popular white- and black-box adversarial attacks imported from the Adversarial Robustness Toolbox (ART). In addition, the transferability of the generated adversarial samples across the given models was studied. The tests were carried out on the CIFAR-10 dataset, and the obtained results show that SpinalNet's susceptibility to the same attacks is similar to that of the traditional VGG model, whereas CCT demonstrates better generalization and robustness. The results of this work can be used as a reference for further studies, such as the development of new attacks and defense mechanisms.
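
As context for the evaluation pipeline the abstract describes, the sketch below shows one way to generate adversarial CIFAR-10 samples with ART and check their transferability to a second model. This is a minimal illustration, not the authors' experimental code: the two placeholder networks (a torchvision VGG-16 and ResNet-18), the batch size, and the FGSM epsilon are assumptions chosen for demonstration.

```python
# Minimal sketch (not the authors' code): craft adversarial CIFAR-10 samples
# against a source model with ART, then measure how well they transfer to a
# different target model. Model choices and eps are illustrative assumptions.
import numpy as np
import torch
import torchvision

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod


def wrap(model: torch.nn.Module) -> PyTorchClassifier:
    """Wrap a PyTorch model in ART's classifier interface."""
    return PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        input_shape=(3, 32, 32),   # CIFAR-10 image shape
        nb_classes=10,
        clip_values=(0.0, 1.0),    # keep perturbed pixels in valid range
    )


# Placeholder models -- substitute the actual trained networks
# (e.g., VGG, SpinalNet, CCT) in a real experiment.
source_clf = wrap(torchvision.models.vgg16(num_classes=10))
target_clf = wrap(torchvision.models.resnet18(num_classes=10))

# A small batch of CIFAR-10 test images, as float arrays in [0, 1].
testset = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True,
    transform=torchvision.transforms.ToTensor(),
)
x = np.stack([testset[i][0].numpy() for i in range(128)])
y = np.array([testset[i][1] for i in range(128)])

# White-box FGSM attack crafted against the source model only.
attack = FastGradientMethod(estimator=source_clf, eps=8 / 255)
x_adv = attack.generate(x=x)

# Evaluate both models on the same adversarial batch: an accuracy drop on
# the target model (which never saw the attack) indicates transferability.
for name, clf in [("source", source_clf), ("target", target_clf)]:
    preds = clf.predict(x_adv).argmax(axis=1)
    print(f"{name} accuracy on adversarial samples: {(preds == y).mean():.3f}")
```

Transferability here is read off directly as the target model's accuracy drop on samples crafted against the source model, mirroring the cross-model evaluation described in the abstract.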

Original language: English (US)
Article number: 592
Journal: Electronics (Switzerland)
Volume: 13
Issue number: 3
DOIs
State: Published - Feb 2024

Bibliographical note

Publisher Copyright:
© 2024 by the authors.

Keywords

  • adversarial attacks
  • ART toolbox
  • CCT
  • robustness
  • SpinalNet
  • transferability
  • VGG

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Signal Processing
  • Hardware and Architecture
  • Computer Networks and Communications
  • Electrical and Electronic Engineering
