Deep networks with internal selective attention through feedback connections

Marijn F. Stollenga, Jonathan Masci, Faustino Gomez, Juergen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

192 Scopus citations

Abstract

Traditional convolutional neural networks (CNNs) are stationary and feedforward: they neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet's feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, using scalable natural evolution strategies (SNES). On the unaugmented CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state of the art.
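
The following is a minimal, self-contained sketch (not the authors' code) of the two mechanisms the abstract names: (a) a feedback loop that re-weights each convolutional filter's sensitivity between passes, and (b) a separable/scalable NES (SNES) loop that searches the policy parameters directly. All function names, the toy stand-in "convolution", and the hyperparameters are illustrative assumptions; the rank-based utilities and step sizes follow the standard separable NES formulation.

import numpy as np

rng = np.random.default_rng(0)

def conv_maps(x, filters):
    # Stand-in for a convolutional layer: one feature map per filter.
    # The real dasNet uses stacked convolutions; this toy version just
    # scales the input so the sketch stays self-contained.
    return np.stack([f * x for f in filters])          # (n_filters, H, W)

def dasnet_forward(x, filters, W, n_steps=3):
    # Iterative feedback: a policy (parameterised by W, hypothetical here)
    # re-weights the sensitivity of each filter between passes.
    a = np.ones(len(filters))                          # neutral gains
    for _ in range(n_steps):
        maps = a[:, None, None] * conv_maps(x, filters)
        obs = maps.mean(axis=(1, 2))                   # per-filter summary
        a = 1.0 + np.tanh(W @ obs)                     # feedback -> new gains
    return maps

def snes(fitness, dim, iters=200, pop=20):
    # Separable NES: diagonal-Gaussian search distribution with rank-based
    # utilities. Cost per sample is O(dim), which is what makes direct
    # policy search in very high-dimensional spaces feasible.
    mu, sigma = np.zeros(dim), np.ones(dim)
    eta_mu = 1.0
    eta_sigma = (3 + np.log(dim)) / (5 * np.sqrt(dim))
    ranks = np.arange(1, pop + 1)
    u = np.maximum(0.0, np.log(pop / 2 + 1) - np.log(ranks))
    u = u / u.sum() - 1.0 / pop                        # standard NES utilities
    for _ in range(iters):
        s = rng.standard_normal((pop, dim))            # N(0, I) samples
        z = mu + sigma * s                             # candidate parameters
        f = np.array([fitness(zi) for zi in z])
        order = np.argsort(-f)                         # best first
        mu = mu + eta_mu * sigma * (u @ s[order])
        sigma = sigma * np.exp(0.5 * eta_sigma * (u @ (s[order] ** 2 - 1)))
    return mu

# Toy usage: evolve a 4x4 policy so the gated maps match a target pattern.
filters = rng.standard_normal(4)
x = rng.standard_normal((8, 8))
target = conv_maps(x, filters) * np.array([2, 1, 1, 0])[:, None, None]

def fitness(theta):
    W = theta.reshape(4, 4)
    return -np.mean((dasnet_forward(x, filters, W) - target) ** 2)

best = snes(fitness, dim=16)
print("final fitness:", fitness(best))

In the paper the fitness is classification performance of the full CNN, and the searched parameters number in the millions; the diagonal (separable) covariance is the design choice that keeps SNES tractable at that scale.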
Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Pages: 3545-3553
Number of pages: 9
State: Published - Jan 1 2014
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
