Quantum variational autoencoder
Abstract
Variational autoencoders (VAEs) are powerful generative models with the salient ability to perform inference. Here, we introduce a quantum variational autoencoder (QVAE): a VAE whose latent generative process is implemented as a quantum Boltzmann machine (QBM). We show that our model can be trained end-to-end by maximizing a well-defined loss function: a 'quantum' lower bound to a variational approximation of the log-likelihood. We use quantum Monte Carlo (QMC) simulations to train and evaluate the performance of QVAEs. To achieve the best performance, we first build a VAE platform with a discrete latent space generated by a restricted Boltzmann machine (RBM). Our model achieves state-of-the-art performance on the MNIST dataset when compared against similar approaches that only involve discrete variables in the generative process. Because QMC simulations are computationally expensive, we consider QVAEs with a smaller number of latent units. We show that, despite training via the quantum bound, QVAEs can be trained effectively in regimes where quantum effects are relevant. Our findings open the way to the use of quantum computers to train QVAEs that achieve competitive performance as generative models. Placing a QBM in the latent space of a VAE leverages the full potential of current and next-generation quantum computers as sampling devices.
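To make the latent-space construction concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a VAE with a binary latent space and an RBM prior, the classical starting point the abstract describes. The encoder produces logits for the approximate posterior q(z|x); a relaxed Bernoulli (Gumbel-style) reparameterization keeps sampling differentiable; and the RBM log-prior enters the evidence lower bound (ELBO) up to its intractable log-partition constant, whose gradient is estimated with a short block-Gibbs "negative phase". All network sizes, the relaxation temperature, and the Gibbs chain length are assumptions made for illustration.

```python
# Illustrative sketch only: a discrete-latent VAE with an RBM prior.
# Sizes, temperature, and Gibbs chain length are assumptions, not the
# paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RBMPrior(nn.Module):
    """RBM prior over binary latents z (visible units) with hidden units h."""
    def __init__(self, n_vis, n_hid):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_vis, n_hid))
        self.a = nn.Parameter(torch.zeros(n_vis))  # visible biases
        self.b = nn.Parameter(torch.zeros(n_hid))  # hidden biases

    def log_prob_unnorm(self, z):
        # Marginal log p(z) up to log Z: a^T z + sum_j softplus(b_j + (zW)_j).
        return z @ self.a + F.softplus(z @ self.W + self.b).sum(-1)

    def gibbs_sample(self, n, steps=50):
        # Short block-Gibbs chain: a classical stand-in for the QMC /
        # quantum-hardware sampling the paper uses for the negative phase.
        with torch.no_grad():
            z = torch.bernoulli(0.5 * torch.ones(n, self.a.numel()))
            for _ in range(steps):
                h = torch.bernoulli(torch.sigmoid(z @ self.W + self.b))
                z = torch.bernoulli(torch.sigmoid(h @ self.W.t() + self.a))
        return z

class DiscreteVAE(nn.Module):
    def __init__(self, n_x=784, n_z=64, n_hid=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_x, 256), nn.ReLU(), nn.Linear(256, n_z))
        self.dec = nn.Sequential(nn.Linear(n_z, 256), nn.ReLU(), nn.Linear(256, n_x))
        self.prior = RBMPrior(n_z, n_hid)

    def elbo(self, x, temperature=0.5):
        logits = self.enc(x)
        # Relaxed Bernoulli sample keeps z differentiable for backprop.
        z = torch.distributions.RelaxedBernoulli(temperature, logits=logits).rsample()
        recon = -F.binary_cross_entropy_with_logits(
            self.dec(z), x, reduction='none').sum(-1)
        entropy = torch.distributions.Bernoulli(logits=logits).entropy().sum(-1)
        log_prior = self.prior.log_prob_unnorm(z)  # omits the constant -log Z
        # Negative phase: prior samples estimate the gradient of log Z
        # with respect to the RBM parameters.
        neg = self.prior.log_prob_unnorm(self.prior.gibbs_sample(x.size(0))).mean()
        return (recon + log_prior + entropy).mean() - neg

# Example usage on random binary data:
model = DiscreteVAE()
x = torch.bernoulli(0.5 * torch.ones(8, 784))
loss = -model.elbo(x)
loss.backward()
```

Schematically, replacing `gibbs_sample` with samples drawn from a QBM, via QMC simulation or a quantum annealer, and the RBM energy with the corresponding quantum bound is what turns this discrete-latent VAE into a QVAE.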
- Publication: Quantum Science and Technology
- Pub Date: January 2019
- DOI: 10.1088/2058-9565/aada1f
- arXiv: arXiv:1802.05779
- Bibcode: 2019QS&T....4a4001K
- Keywords: variational autoencoders; quantum annealing; generative models; Quantum Physics; Computer Science - Machine Learning; Statistics - Machine Learning
- E-Print: v2: published version. 13 pages, 3 figures, 2 tables