Abstract
Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data points is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses all three, using a variational approximation to the posterior that is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte Carlo sampling scheme that allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper is available at github.com/sparseMCMC.
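The authors' replication code lives at github.com/sparseMCMC; the sketch below is not that implementation, only a rough illustration of the idea in the abstract. It runs Hamiltonian (Hybrid) Monte Carlo jointly over the inducing outputs u and a log-lengthscale on a toy regression problem, targeting the free-form optimal posterior log q(u, θ) ∝ E_{p(f|u,θ)}[log p(y|f)] + log p(u|θ) + log p(θ). It assumes a Gaussian likelihood (so the expected log-likelihood is closed-form) and uses finite-difference gradients purely for brevity; all names (`rbf`, `logp`, `hmc_step`) are illustrative assumptions, not the paper's API.

```python
# Illustrative sketch only; the paper's code is at github.com/sparseMCMC.
# HMC over inducing outputs u and a log-lengthscale, targeting the
# unnormalised optimal variational posterior
#   log q(u, log_ell) = E_{p(f|u)}[log p(y|f)] + log p(u) + log p(log_ell).
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data and M = 8 inducing inputs (all assumed for the demo).
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Z = np.linspace(-3, 3, 8)[:, None]
sigma2 = 0.1 ** 2                      # fixed Gaussian noise variance

def rbf(A, B, log_ell):
    """Unit-variance squared-exponential kernel, lengthscale exp(log_ell)."""
    return np.exp(-0.5 * (A - B.T) ** 2 / np.exp(2.0 * log_ell))

def logp(theta):
    """Unnormalised log target over theta = (u_1, ..., u_M, log_ell)."""
    u, log_ell = theta[:-1], theta[-1]
    Kmm = rbf(Z, Z, log_ell) + 1e-6 * np.eye(len(Z))
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, rbf(X, Z, log_ell).T)      # M x N
    mu = A.T @ np.linalg.solve(L, u)                  # E[f_n | u]
    s = 1.0 - np.sum(A ** 2, axis=0)                  # Var[f_n | u]
    # Closed-form E_{p(f|u)}[log N(y | f, sigma2)], summed over data points.
    ell_y = (-0.5 * np.sum((y - mu) ** 2 + s) / sigma2
             - 0.5 * len(y) * np.log(2.0 * np.pi * sigma2))
    alpha = np.linalg.solve(L, u)                     # for log N(u; 0, Kmm)
    log_pu = -0.5 * alpha @ alpha - np.log(np.diag(L)).sum()
    return ell_y + log_pu - 0.5 * log_ell ** 2        # N(0,1) prior on log_ell

def grad(theta, eps=1e-5):
    """Finite-difference gradient of logp (a stand-in for exact gradients)."""
    g = np.empty_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (logp(theta + e) - logp(theta - e)) / (2.0 * eps)
    return g

def hmc_step(theta, step=0.01, n_leap=20):
    """One leapfrog HMC transition with a Metropolis accept/reject."""
    p0 = rng.standard_normal(len(theta))
    th, p = theta.copy(), p0 + 0.5 * step * grad(theta)
    for _ in range(n_leap):
        th = th + step * p
        p = p + step * grad(th)
    p = p - 0.5 * step * grad(th)
    log_accept = logp(th) - logp(theta) + 0.5 * (p0 @ p0 - p @ p)
    return th if np.log(rng.uniform()) < log_accept else theta

theta = np.zeros(len(Z) + 1)           # start at u = 0, log_ell = 0
samples = []
for _ in range(200):
    theta = hmc_step(theta)
    samples.append(theta)
print("posterior mean lengthscale:",
      np.exp(np.mean([s[-1] for s in samples[100:]])))
```

Because the chain moves in (u, log_ell) jointly, the samples capture non-Gaussian posterior structure over both the function values and the covariance parameter, which is the point of the free-form approximation; the paper's actual scheme uses exact gradients and handles non-Gaussian likelihoods by quadrature.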
Original language | English (US)
---|---
Pages | 1648-1656
Number of pages | 9
State | Published - 2015
Event | 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada. Duration: Dec 7 2015 → Dec 12 2015
Other

Other | 29th Annual Conference on Neural Information Processing Systems, NIPS 2015
---|---
Country/Territory | Canada
City | Montreal
Period | 12/7/15 → 12/12/15
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing