Understanding locally competitive networks

Rupesh Kumar Srivastava, Jonathan Masci, Faustino Gomez, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently proposed neural network activation functions such as rectified linear, maxout, and local winner-take-all have allowed for faster and more effective training of deep neural architectures on large and complex datasets. The common trait among these functions is that they implement local competition between small groups of computational units within a layer, so that only part of the network is activated for any given input pattern. In this paper, we attempt to visualize and understand this self-modularization, and suggest a unified explanation for the beneficial properties of such networks. We also show how our insights can be directly useful for efficiently performing retrieval over large datasets using neural networks.
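
To illustrate the local competition the abstract describes, below is a minimal NumPy sketch of a local winner-take-all (LWTA) layer: units are partitioned into small groups, and within each group only the maximally activated unit passes its value through while the others are zeroed, so each input activates a different sub-network. This is a generic illustration under assumed conventions (group size k, function name lwta), not code from the paper itself.

```python
import numpy as np

def lwta(activations: np.ndarray, k: int = 2) -> np.ndarray:
    """Local winner-take-all: within each group of k units,
    only the per-group maximum survives; the rest are zeroed.

    activations: shape (batch, n_units), n_units divisible by k.
    Group size k and this exact formulation are illustrative assumptions.
    """
    batch, n_units = activations.shape
    assert n_units % k == 0, "layer width must be divisible by group size"
    groups = activations.reshape(batch, n_units // k, k)
    # Boolean mask selecting the winner within each group.
    winners = groups == groups.max(axis=2, keepdims=True)
    return (groups * winners).reshape(batch, n_units)

# Example: with k=2, only one unit per pair stays active.
x = np.array([[0.3, -1.2, 2.0, 0.5]])
print(lwta(x, k=2))  # [[0.3, 0.0, 2.0, 0.0]]
```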
Original language: English (US)
Title of host publication: 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings
Publisher: International Conference on Learning Representations, ICLR
State: Published - Jan 1 2015
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
