Stochastic spectral descent for restricted Boltzmann machines

David Carlson, Volkan Cevher, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

22 Scopus citations

Abstract

Restricted Boltzmann Machines (RBMs) are widely used as building blocks for deep learning models. Learning typically proceeds by stochastic gradient descent, with the gradients estimated by sampling methods. However, gradient estimation is a computational bottleneck, so better use of the gradients will speed up the descent algorithm. To this end, we first derive upper bounds on the RBM cost function, then show that descent methods can have natural advantages by operating in the ℓ∞ and Schatten-∞ norms. We introduce a new method called "Stochastic Spectral Descent" that updates parameters in the normed space. Empirical results show dramatic improvements over stochastic gradient descent, with only a fractional increase in per-iteration cost.
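The abstract's normed-space updates can be illustrated with the standard steepest-descent "sharp" operators: for the ℓ∞ norm the direction is the sign of the gradient scaled by its ℓ1 norm, and for the Schatten-∞ (spectral) norm it is the product of the gradient's singular-vector factors scaled by its nuclear norm. The sketch below is an assumption-laden illustration of that geometry, not the paper's exact algorithm; the function names and the smoothness constant `L` are hypothetical.

```python
import numpy as np

def linf_sharp(g):
    # Steepest-descent direction w.r.t. the l_inf norm:
    # g# = ||g||_1 * sign(g)
    return np.abs(g).sum() * np.sign(g)

def spectral_sharp(G):
    # Steepest-descent direction w.r.t. the Schatten-inf (spectral) norm:
    # G# = ||G||_{S1} * U V^T, where G = U diag(s) V^T is the SVD
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return s.sum() * (U @ Vt)

def spectral_descent_step(W, grad_W, L):
    # One illustrative update in the spectral-norm geometry, assuming an
    # upper bound on the cost with smoothness constant L in that norm.
    return W - (1.0 / L) * spectral_sharp(grad_W)
```

The spectral update requires an SVD of the (stochastic) gradient each iteration, which is the "fractional increase on the per-iteration cost" the abstract refers to relative to plain SGD.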
Original language: English (US)
Title of host publication: Journal of Machine Learning Research
Publisher: Microtome Publishing
Pages: 111-119
Number of pages: 9
State: Published - Jan 1 2015
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09
