Generative Deep Deconvolutional Learning

Yunchen Pu, Xin Yuan, Lawrence Carin

Research output: Contribution to journal › Article › peer-review


Abstract

A generative Bayesian model is developed for deep (multi-layer) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up and top-down probabilistic learning. After learning the deep convolutional dictionary, testing is implemented via deconvolutional inference. To speed up this inference, a new statistical approach is proposed to project the top-layer dictionary elements to the data level, so that only one layer of deconvolution is required during testing. Experimental results demonstrate the model's ability to learn powerful multi-layer features from images. Excellent classification results are obtained on both the MNIST and Caltech 101 datasets.
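To illustrate the projection step described above, the sketch below composes a two-layer convolutional dictionary into equivalent data-level filters: each top-layer element's channels are convolved with the corresponding layer-1 filters and summed, so test-time inference can use a single layer of deconvolution. This is only a minimal illustration under assumed array shapes; the function name project_to_data_level is hypothetical, and the probabilistic pooling/unpooling step of the paper is omitted here.

import numpy as np
from scipy.signal import convolve2d

def project_to_data_level(D1, D2):
    """Compose a two-layer convolutional dictionary into data-level filters (illustrative sketch).

    D1: layer-1 dictionary, shape (K1, h1, w1)     -- K1 image-level filters
    D2: layer-2 dictionary, shape (K2, K1, h2, w2) -- one channel per layer-1 feature map
    Returns: array of shape (K2, h1 + h2 - 1, w1 + w2 - 1) -- projected data-level filters
    """
    K1, h1, w1 = D1.shape
    K2, _, h2, w2 = D2.shape
    projected = np.zeros((K2, h1 + h2 - 1, w1 + w2 - 1))
    for k2 in range(K2):
        for k1 in range(K1):
            # Convolving each layer-2 channel with its layer-1 filter and summing
            # over channels yields an equivalent single data-level filter.
            projected[k2] += convolve2d(D2[k2, k1], D1[k1], mode="full")
    return projected

# Toy usage: 8 layer-1 filters of size 5x5, 4 layer-2 elements of size 3x3.
rng = np.random.default_rng(0)
D1 = rng.standard_normal((8, 5, 5))
D2 = rng.standard_normal((4, 8, 3, 3))
D_data = project_to_data_level(D1, D2)
print(D_data.shape)  # (4, 7, 7)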
Original language: English (US)
Journal: arXiv preprint
State: Published - Dec 18, 2014
Externally published: Yes

Bibliographical note

21 pages, 9 figures, revised version for ICLR 2015

Keywords

  • stat.ML
  • cs.LG
