Fitting Latent Non-Gaussian Models Using Variational Bayes and Laplace Approximations

Rafael Cabral*, David Bolin, Håvard Rue

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Latent Gaussian models (LGMs) are perhaps the most commonly used class of models in statistical applications. Nevertheless, in areas ranging from longitudinal studies in biostatistics to geostatistics, it is easy to find datasets that contain inherently non-Gaussian features, such as sudden jumps or spikes, that adversely affect the inferences and predictions made using an LGM. These datasets require more general latent non-Gaussian models (LnGMs) that can automatically handle these non-Gaussian features. However, fast implementations and easy-to-use software are lacking, preventing the broad applicability of LnGMs. In this article, we derive variational Bayes algorithms for fast and scalable inference of LnGMs. The approximation leads to an LGM that downweights extreme events in the latent process, reducing their influence and leading to more robust inferences. It can be applied to a wide range of models, such as autoregressive processes for time series, simultaneous autoregressive models for areal data, and spatial Matérn models. To facilitate Bayesian inference, we introduce the ngvb package, in which LGMs implemented in R-INLA can be easily extended to LnGMs by adding a single line of code. Supplementary materials for this article are available online.
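The abstract's claim that an R-INLA model can be extended to an LnGM "by adding a single line of code" can be illustrated with a minimal sketch. The workflow below is an assumption based on the abstract's description of the ngvb package: the exact function name `ngvb()`, its `fit` argument, and the toy data are illustrative, not confirmed API details.

```r
library(INLA)   # R-INLA: fits the latent Gaussian model
library(ngvb)   # companion package introduced in this article

# Toy time series with a sudden jump, which an LGM tends to oversmooth
df <- data.frame(y = c(rnorm(50), rnorm(50, mean = 5)), idx = 1:100)

# Step 1: fit an ordinary LGM in R-INLA (here a first-order random walk)
LGM <- inla(y ~ -1 + f(idx, model = "rw1"), data = df)

# Step 2: the advertised "single line" — extend the LGM to a latent
# non-Gaussian model (illustrative call; see the package documentation)
LnGM <- ngvb(fit = LGM)
```

The appeal of this design is that all model specification stays in the familiar R-INLA syntax; the non-Gaussian extension is applied afterwards to the fitted object rather than requiring a separate modeling language.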

Original language: English (US)
State: Accepted/In press - 2024

Bibliographical note

Publisher Copyright:
© 2024 American Statistical Association.


Keywords

  • Heavy-tailed
  • Hierarchical models
  • Markov random fields
  • Normal-inverse Gaussian
  • Variational inference

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty


