Is it enough to optimize CNN architectures on ImageNet?

Lukas Tuggener, Juergen Schmidhuber, Thilo Stadelmann

Research output: Contribution to journal › Article › peer-review


Abstract

Classification performance on ImageNet is the de facto standard metric for convolutional neural network (CNN) development. In this work we challenge the notion that CNN architecture design based solely on ImageNet leads to generally effective architectures that perform well across a diverse set of datasets and application domains. To this end, we investigate and ultimately improve ImageNet as a basis for deriving such architectures. We conduct an extensive empirical study in which we train 500 CNN architectures, sampled from the broad AnyNetX design space, on ImageNet as well as on 8 additional well-known image classification benchmark datasets from a diverse array of application domains. We observe that architecture performance is highly dataset dependent; some datasets even exhibit a negative error correlation with ImageNet across all architectures. We show how to significantly increase these correlations by using ImageNet subsets restricted to fewer classes. These contributions can have a profound impact on how we design future CNN architectures and help alleviate the current tilt in our community toward over-reliance on a single dataset.
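The central measurement in the abstract is the correlation of per-architecture errors between ImageNet and another dataset: each sampled architecture is trained on both datasets, and the resulting error pairs are correlated across architectures. The following is a minimal sketch of that computation; the error values and variable names are hypothetical placeholders, not numbers from the paper.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical test errors for five sampled architectures; in the study,
# each pair would come from training one AnyNetX architecture on both
# ImageNet and a second benchmark dataset.
imagenet_err = [0.30, 0.28, 0.35, 0.25, 0.40]
other_err = [0.12, 0.11, 0.15, 0.10, 0.18]

r = pearson(imagenet_err, other_err)
print(f"error correlation: {r:.3f}")
```

A correlation near 1 means architecture rankings transfer from ImageNet to the other dataset; a negative value, as the abstract reports for some datasets, means architectures that do better on ImageNet tend to do worse there.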
Original language: English (US)
Journal: Frontiers in Computer Science
Volume: 4
DOIs
State: Published - Nov 15, 2022

Bibliographical note

KAUST Repository Item: Exported on 2022-12-13
Acknowledgements: This work has been financially supported by grants 25948.1 PFES-ES Ada (CTI), 34301.1 IP-ICT RealScore (Innosuisse) and ERC Advanced Grant AlgoRNN No. 742870. Open access funding provided by Zurich University of Applied Sciences (ZHAW). We are grateful to Frank P. Schilling for his valuable inputs.
