Recently proposed neural network activation functions such as rectified linear, maxout, and local winner-take-all have allowed for faster and more effective training of deep neural architectures on large and complex datasets. The common trait among these functions is that they implement local competition between small groups of computational units within a layer, so that only part of the network is activated for any given input pattern. In this paper, we attempt to visualize and understand this self-modularization, and suggest a unified explanation for the beneficial properties of such networks. We also show how our insights can be directly useful for efficiently performing retrieval over large datasets using neural networks.
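The local competition the abstract describes can be made concrete with a small sketch. The following is a minimal NumPy illustration (not the authors' code) of two of the named activation functions: local winner-take-all, where only the largest unit in each group stays active, and maxout, where each group is reduced to its maximum. Group size and the example vector are arbitrary choices for illustration.

```python
import numpy as np

def lwta(x, group_size=2):
    """Local winner-take-all: within each group of units, only the
    largest activation is kept; the others are zeroed out."""
    groups = x.reshape(-1, group_size)
    mask = groups == groups.max(axis=1, keepdims=True)
    return (groups * mask).reshape(x.shape)

def maxout(x, group_size=2):
    """Maxout: each group of units is reduced to its maximum,
    shrinking the layer by a factor of group_size."""
    return x.reshape(-1, group_size).max(axis=1)

h = np.array([0.3, -1.2, 2.0, 0.7])
print(lwta(h))    # -> [0.3, 0.0, 2.0, 0.0]: one winner per pair
print(maxout(h))  # -> [0.3, 2.0]: one output per pair
```

In both cases, each input pattern activates only a subset of the units, which is the self-modularization the paper sets out to visualize.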
Original language: English (US)
Title of host publication: 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings
Publisher: International Conference on Learning Representations, ICLR
State: Published - Jan 1 2015