On the Asymptotic Efficiency of Approximate Bayesian Computation Estimators
Abstract
Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit. Our key assumption is that the data are summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic that is the same dimension as the parameter vector, p; and that any summary statistic of higher dimension can be reduced, through a linear transformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution such that the variance of the estimator will be the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model.
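To make the setting concrete, the following is a minimal sketch of rejection-based approximate Bayesian computation for a toy normal-mean model, where the summary statistic is the sample mean. This is an illustration of the general method the abstract refers to, not the algorithm analyzed in the paper; the prior, tolerance, and sample sizes below are arbitrary choices for the example.

```python
import numpy as np

# Toy model: y_i ~ N(theta, 1); summary statistic = sample mean.
# The ABC posterior mean of the accepted draws is the kind of
# estimator whose asymptotic variance the paper studies.
rng = np.random.default_rng(0)

n = 200                                  # number of observations
theta_true = 1.5                         # true parameter (for simulation)
y = rng.normal(theta_true, 1.0, size=n)
s_obs = y.mean()                         # observed summary statistic

def abc_rejection(s_obs, n, eps, n_draws, rng):
    """Draw theta from the prior, simulate a dataset, and keep draws
    whose simulated summary lies within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, 5.0)                    # N(0, 25) prior
        s_sim = rng.normal(theta, 1.0, size=n).mean()   # simulated summary
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

samples = abc_rejection(s_obs, n, eps=0.05, n_draws=20000, rng=rng)
post_mean = samples.mean()   # ABC posterior mean estimator of theta
```

Here the summary statistic has the same dimension as the parameter (both scalar), matching the dimension-matching recommendation in the abstract; the paper's importance sampling scheme replaces the prior draws above with draws from a proposal distribution, reweighted accordingly.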
 Publication:
 arXiv e-prints
 Pub Date:
 June 2015
 DOI:
 10.48550/arXiv.1506.03481
 arXiv:
 arXiv:1506.03481
 Bibcode:
 2015arXiv150603481L
 Keywords:

 Statistics - Methodology;
 Mathematics - Statistics Theory
 E-Print:
 Main text shortened and proof revised. To appear in Biometrika