Implicit Deep Latent Variable Models for Text Generation
Abstract
Deep latent variable models (LVMs) such as the variational auto-encoder (VAE) have recently played an important role in text generation. One key factor is their exploitation of smooth latent structures to guide generation. However, the representation power of VAEs is limited for two reasons: (1) a Gaussian assumption is typically made on the variational posteriors, and (2) a notorious "posterior collapse" issue occurs. In this paper, we advocate sample-based representations of variational distributions for natural language, leading to implicit latent features, which offer more flexible representation power than Gaussian-based posteriors. We further develop an LVM that directly matches the aggregated posterior to the prior. It can be viewed as a natural extension of VAEs with a regularization that maximizes mutual information, mitigating the "posterior collapse" issue. We demonstrate the effectiveness and versatility of our models in various text generation scenarios, including language modeling, unaligned style transfer, and dialog response generation. The source code to reproduce our experimental results is available on GitHub.
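The two ideas in the abstract, a sample-based (implicit) posterior and matching the aggregated posterior to the prior instead of relying on a per-example analytic KL term, can be illustrated concretely. The following is a minimal PyTorch sketch, not the authors' released implementation: the module names, network sizes, and the GAN-style critic objective used as a sample-based surrogate for the divergence are illustrative assumptions about how such a model might be wired up.

```python
# Minimal sketch (assumptions, not the paper's code) of:
# (1) an implicit posterior q(z|x): the encoder consumes noise and emits a latent
#     sample directly, with no Gaussian mean/variance parameterization;
# (2) aggregated-posterior matching: an auxiliary critic is trained to separate
#     posterior samples from prior samples, and its score on posterior samples
#     serves as a sample-based regularizer pushing q(z) toward p(z).
import torch
import torch.nn as nn

LATENT_DIM, HIDDEN_DIM, NOISE_DIM = 32, 256, 32  # illustrative sizes

class ImplicitEncoder(nn.Module):
    """Maps an input representation h(x) plus Gaussian noise to a latent sample z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HIDDEN_DIM + NOISE_DIM, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, LATENT_DIM),
        )

    def forward(self, h):
        eps = torch.randn(h.size(0), NOISE_DIM, device=h.device)
        return self.net(torch.cat([h, eps], dim=-1))  # implicit sample from q(z|x)

class Critic(nn.Module):
    """Scores latent codes; trained to tell posterior samples from prior samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, 1),
        )

    def forward(self, z):
        return self.net(z).squeeze(-1)

def critic_loss(critic, z_post, z_prior):
    # Logistic objective: label posterior samples 1 and prior samples 0, so the
    # critic's logit approximates a density ratio between q(z) and p(z).
    bce = nn.BCEWithLogitsLoss()
    return (bce(critic(z_post.detach()), torch.ones(z_post.size(0))) +
            bce(critic(z_prior), torch.zeros(z_prior.size(0))))

def regularizer(critic, z_post):
    # Sample-based surrogate for the divergence between q(z) and p(z): minimized
    # by the encoder so its codes become indistinguishable from prior samples.
    return critic(z_post).mean()

if __name__ == "__main__":
    enc, critic = ImplicitEncoder(), Critic()
    h = torch.randn(8, HIDDEN_DIM)           # stand-in for sentence encodings
    z = enc(h)                                # one latent sample per input
    z_prior = torch.randn(8, LATENT_DIM)      # samples from p(z) = N(0, I)
    print(critic_loss(critic, z, z_prior).item(), regularizer(critic, z).item())
```

In a full model this regularizer would be added to the reconstruction loss of a text decoder, with the critic updated in alternation; the sketch only shows the latent-space components implied by the abstract.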
- Publication: arXiv e-prints
- Pub Date: August 2019
- DOI: 10.48550/arXiv.1908.11527
- arXiv: arXiv:1908.11527
- Bibcode: 2019arXiv190811527F
- Keywords: Computer Science - Machine Learning; Computer Science - Computation and Language; Statistics - Machine Learning
- E-Print: 13 pages, 8 tables, 1 figure. Accepted at the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019)