Deep Involutive Generative Models for Neural MCMC
Abstract
We introduce deep involutive generative models, a new architecture for deep generative modeling, and use them to define Involutive Neural MCMC, a new approach to fast neural MCMC. An involutive generative model represents a probability kernel $G(\phi \mapsto \phi')$ as an involutive (i.e., self-inverting) deterministic function $f(\phi, \pi)$ on an enlarged state space containing auxiliary variables $\pi$. We show how to make these models volume preserving, and how to use deep volume-preserving involutive generative models to make valid Metropolis-Hastings updates based on an auxiliary variable scheme with an easy-to-calculate acceptance ratio. We prove that deep involutive generative models and their volume-preserving special case are universal approximators for probability kernels. This result implies that with enough network capacity and training time, they can be used to learn arbitrarily complex MCMC updates. We define a loss function and optimization algorithm for training parameters given simulated data. We also provide initial experiments showing that Involutive Neural MCMC can efficiently explore multimodal distributions that are intractable for Hybrid Monte Carlo, and can converge faster than A-NICE-MC, a recently introduced neural MCMC technique.
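The abstract's core construction — a Metropolis-Hastings update driven by a volume-preserving involution $f(\phi, \pi)$ on an enlarged state space — can be illustrated with a minimal sketch. This is not the paper's method: in place of a learned deep network it uses a hand-chosen linear involution $f(\phi, \pi) = (\phi + \pi, -\pi)$ (which satisfies $f(f(\phi, \pi)) = (\phi, \pi)$ and has $|\det J| = 1$), and the target and auxiliary densities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(phi):
    # Illustrative 1-D target: an equal mixture of Gaussians at +/-3
    # (unnormalized; MH only needs density ratios).
    return np.logaddexp(-0.5 * (phi - 3.0) ** 2, -0.5 * (phi + 3.0) ** 2)

def log_q(pi):
    # Auxiliary variable density: standard normal (unnormalized).
    return -0.5 * pi ** 2

def f(phi, pi):
    # A hand-chosen volume-preserving involution: applying f twice
    # returns (phi, pi), and the Jacobian determinant has magnitude 1.
    return phi + pi, -pi

def involutive_mh_step(phi):
    pi = rng.standard_normal()           # resample the auxiliary variable
    phi_new, pi_new = f(phi, pi)         # deterministic involutive proposal
    # Because f is volume preserving, the acceptance ratio reduces to a
    # ratio of joint densities, with no Jacobian correction needed.
    log_ratio = (log_p(phi_new) + log_q(pi_new)) - (log_p(phi) + log_q(pi))
    if np.log(rng.random()) < log_ratio:
        return phi_new
    return phi

# Run a short chain from phi = 0.
samples = []
phi = 0.0
for _ in range(5000):
    phi = involutive_mh_step(phi)
    samples.append(phi)
```

The involution property guarantees the proposal is its own reverse move, which is what makes the simple acceptance ratio valid; Involutive Neural MCMC replaces this fixed `f` with a trained volume-preserving network.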
 Publication:

arXiv e-prints
 Pub Date:
 June 2020
 arXiv:
 arXiv:2006.15167
 Bibcode:
 2020arXiv200615167S
 Keywords:

 Statistics - Machine Learning;
 Computer Science - Machine Learning;
 Computer Science - Neural and Evolutionary Computing;
 Mathematics - Statistics Theory
 E-Print:
 13 pages, 6 figures. Revised discussion of the Jacobian determinant factor in the acceptance ratio