Parametric Adversarial Divergences are Good Task Losses for Generative Modeling
Abstract
Generative modeling of high-dimensional data like images is a notoriously difficult and ill-defined problem. In particular, how to evaluate a learned generative model is unclear. In this position paper, we argue that adversarial learning, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for generative modeling tasks, such as for generating "visually realistic" images. We refer to those task losses as parametric adversarial divergences and we give two main reasons why we think parametric divergences are good learning objectives for generative modeling. Additionally, we unify the processes of choosing a good structured loss (in structured prediction) and choosing a discriminator architecture (in generative modeling) using statistical decision theory; we are then able to formalize and quantify the intuition that "weaker" losses are easier to learn from, in a specific setting. Finally, we propose two new challenging tasks to evaluate parametric and nonparametric divergences: a qualitative task of generating very high-resolution digits, and a quantitative task of learning data that satisfies high-level algebraic constraints. We use two common divergences to train a generator and show that the parametric divergence outperforms the nonparametric divergence on both the qualitative and the quantitative task.
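As a rough illustration (our notation, not taken from the abstract), a parametric adversarial divergence of the kind described here is obtained by restricting the discriminator to a parametric family $\{f_\theta : \theta \in \Theta\}$, e.g., a fixed neural architecture. The canonical GAN-style instance compares the data distribution $\mu$ against the generator distribution $\nu$ via

```latex
\mathrm{Div}_\Theta(\mu \,\|\, \nu)
  \;=\; \sup_{\theta \in \Theta}
  \Bigl(
    \mathbb{E}_{x \sim \mu}\bigl[\log f_\theta(x)\bigr]
    \;+\;
    \mathbb{E}_{y \sim \nu}\bigl[\log\bigl(1 - f_\theta(y)\bigr)\bigr]
  \Bigr)
```

Taking the supremum over all measurable discriminators instead of a parametric family recovers a nonparametric divergence (up to constants, the Jensen-Shannon divergence in this instance), which is the contrast the abstract draws between parametric and nonparametric divergences.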
Publication: arXiv e-prints
Pub Date: August 2017
arXiv: arXiv:1708.02511
Bibcode: 2017arXiv170802511H
Keywords: Computer Science - Machine Learning; Statistics - Machine Learning
E-Print: 22 pages