Description
One of the approaches to building artificial intelligence (AI) is to decipher the principles of brain function and to employ similar mechanisms for solving cognitive tasks, such as visual perception or natural language understanding, using machines. The recent breakthrough known as deep learning demonstrated that large multi-layer networks of artificial neural-like computing units attain remarkable performance on some of these tasks. Nevertheless, such artificial networks remain only loosely inspired by the brain, whose rich structures and mechanisms may further suggest new algorithms or even new paradigms of computation. In this thesis, we explore brain-inspired probabilistic mechanisms, such as neural and synaptic stochasticity, in the context of generative models. The two questions we ask here are: (i) what kind of models can describe a neural learning system built of stochastic components? and (ii) how can we implement such systems efficiently? To give specific answers, we consider two well-known models and the corresponding neural architectures: the Naive Bayes model implemented with a winner-take-all spiking neural network, and the Boltzmann machine implemented in a spiking or non-spiking fashion. We propose and analyze an efficient neuromorphic implementation of the stochastic neural firing mechanism and study the effects of synaptic unreliability on learning generative energy-based models implemented with neural networks.
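The last sentence of the abstract names two technical ingredients: stochastic neural firing and synaptic unreliability during the learning of energy-based generative models such as the Boltzmann machine. As a rough illustration of how these ingredients can interact, the sketch below trains a small restricted Boltzmann machine with one step of contrastive divergence (CD-1) while each synapse transmits only with probability `p_transmit`. This is a minimal sketch under assumed details, not the implementation studied in the thesis: the network sizes, the transmission probability, the CD-1 rule, and the choice to mask the weight update with the same synaptic mask are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative restricted Boltzmann machine with unreliable synapses.
# All dimensions and probabilities below are assumptions, not thesis values.
n_visible, n_hidden = 6, 4
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
p_transmit = 0.8  # probability that a synapse transmits on a given update

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W_eff):
    """Stochastic neural firing: each hidden unit fires with sigmoid probability."""
    p = sigmoid(v @ W_eff)
    return (rng.random(p.shape) < p).astype(float), p

def cd1_step(v0, W, lr=0.05):
    """One CD-1 update through a randomly masked (unreliable) weight matrix."""
    mask = (rng.random(W.shape) < p_transmit).astype(float)
    W_eff = W * mask                       # failed synapses transmit nothing
    h0, _ = sample_hidden(v0, W_eff)
    pv = sigmoid(h0 @ W_eff.T)             # stochastic reconstruction of visibles
    v1 = (rng.random(pv.shape) < pv).astype(float)
    _, ph1 = sample_hidden(v1, W_eff)
    # Data statistics minus model statistics, masked like the forward pass.
    dW = (np.outer(v0, h0) - np.outer(v1, ph1)) * mask
    return W + lr * dW

v = (rng.random(n_visible) < 0.5).astype(float)  # toy binary data vector
for _ in range(100):
    W = cd1_step(v, W)
```

Masking the gradient with the forward-pass mask reflects one plausible reading of synaptic unreliability, namely that a synapse that failed to transmit neither contributes to activity nor learns on that update; the thesis may treat synaptic failures differently.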
| Date made available | 2015 |
| --- | --- |
| Publisher | KAUST Research Repository |