Variational f-divergence Minimization
Abstract
Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution. In light of recent successes in training Generative Adversarial Networks, alternative non-likelihood training criteria have been proposed. Whilst not necessarily statistically efficient, these alternatives may better match user requirements such as sharp image generation. A general variational method for training probabilistic latent variable models using maximum likelihood is well established; however, how to train latent variable models using other f-divergences is comparatively unknown. We discuss a variational approach that, when combined with the recently introduced Spread Divergence, can be applied to train a large class of latent variable models using any f-divergence.
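For context, here is a brief sketch of the identity behind the abstract's opening claim that maximum likelihood corresponds to minimizing a specific f-divergence. This is the standard textbook derivation, not material taken from the paper itself; `p` denotes the data distribution and `p_theta` the model:

```latex
% f-divergence between data distribution p and model p_theta,
% defined for any convex f with f(1) = 0:
\[
  D_f(p \,\|\, p_\theta)
  \;=\; \int p_\theta(x)\, f\!\Big(\tfrac{p(x)}{p_\theta(x)}\Big)\, dx .
\]
% The choice f(u) = u log u recovers the KL divergence:
\[
  f(u) = u \log u
  \;\Longrightarrow\;
  D_f(p \,\|\, p_\theta)
  = \int p(x) \log \tfrac{p(x)}{p_\theta(x)}\, dx
  = \mathrm{KL}(p \,\|\, p_\theta).
\]
% Since the entropy of p is constant in theta, minimizing this KL
% over theta is equivalent to maximizing E_{p(x)}[log p_theta(x)],
% i.e. maximum likelihood training.
```

Other choices of f (e.g. those underlying GAN-style objectives) yield the alternative, non-likelihood training criteria the abstract refers to.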
- Publication: arXiv e-prints
- Pub Date: July 2019
- DOI: 10.48550/arXiv.1907.11891
- arXiv: arXiv:1907.11891
- Bibcode: 2019arXiv190711891Z
- Keywords: Statistics - Machine Learning; Computer Science - Machine Learning