TY - JOUR
T1 - Gradient Free Parameter Estimation for Hidden Markov Models with Intractable Likelihoods
AU - Ehrlich, Elena
AU - Jasra, Ajay
AU - Kantas, Nikolas
N1 - Generated from Scopus record by KAUST IRTS on 2019-11-20
PY - 2015/6/1
Y1 - 2015/6/1
AB - In this article we focus on maximum likelihood estimation (MLE) for the static model parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state, because of increased computational complexity or analytical intractability. Instead, we assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. Although these ABC approximations induce a bias, it can be controlled to arbitrary precision via a positive parameter $\epsilon$, with the bias decreasing as $\epsilon$ decreases. We first establish that, when using an ABC approximation of the HMM for a fixed batch of data, the bias of the resulting log-marginal likelihood and its gradient is no worse than $\mathcal{O}(n\epsilon)$, where $n$ is the total number of data points. Therefore, when using gradient methods to perform MLE for the ABC approximation of the HMM, one may expect parameter estimates of reasonable accuracy. To compute an estimate of the unknown and fixed model parameters, we propose a gradient approach based on simultaneous perturbation stochastic approximation (SPSA) and sequential Monte Carlo (SMC) for the ABC approximation of the HMM. The performance of this method is illustrated using two numerical examples.
UR - http://link.springer.com/10.1007/s11009-013-9357-4
UR - http://www.scopus.com/inward/record.url?scp=84879869736&partnerID=8YFLogxK
U2 - 10.1007/s11009-013-9357-4
DO - 10.1007/s11009-013-9357-4
M3 - Article
SN - 1573-7713
VL - 17
JO - Methodology and Computing in Applied Probability
JF - Methodology and Computing in Applied Probability
IS - 2
ER -