Neural Implicit Manifold Learning for Topology-Aware Density Estimation
Abstract
Natural data observed in $\mathbb{R}^n$ is often constrained to an $m$-dimensional manifold $\mathcal{M}$, where $m < n$. This work focuses on the task of building theoretically principled generative models for such data. Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network $f_\theta: \mathbb{R}^m \to \mathbb{R}^n$. These procedures, which we call pushforward models, incur a straightforward limitation: manifolds cannot in general be represented with a single parameterization, so attempts to do so will result in either computational instability or an inability to learn probability densities within the manifold. To remedy this problem, we propose to model $\mathcal{M}$ as a neural implicit manifold: the set of zeros of a neural network. We then learn the probability density within $\mathcal{M}$ with a constrained energy-based model, which employs a constrained variant of Langevin dynamics to train and sample from the learned manifold. In experiments on synthetic and natural data, we show that our model can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
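The abstract describes two components: an implicit manifold defined as the zero set of a network $g_\phi: \mathbb{R}^n \to \mathbb{R}^{n-m}$, and a constrained energy-based model sampled with a constrained variant of Langevin dynamics. As a rough illustration of the idea only, and not the authors' implementation (see the linked repository for that), the PyTorch sketch below alternates an unconstrained Langevin step on an energy network with a gradient-descent projection back onto the zero set of $g_\phi$. All names, network sizes, and step sizes here (`g`, `energy`, `project_to_manifold`, and so on) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative dimensions: ambient n = 3, manifold m = 2, so g maps R^3 -> R^1.
n_ambient, m_manifold = 3, 2

# g_phi: R^n -> R^(n - m); the implicit manifold is its zero set {x : g(x) = 0}.
g = nn.Sequential(nn.Linear(n_ambient, 64), nn.Tanh(),
                  nn.Linear(64, n_ambient - m_manifold))

# E_theta: R^n -> R, an energy defining an unnormalized density on the manifold.
energy = nn.Sequential(nn.Linear(n_ambient, 64), nn.Tanh(), nn.Linear(64, 1))

def project_to_manifold(x, steps=20, lr=0.1):
    """Pull points toward the zero set of g by gradient descent on ||g(x)||^2."""
    x = x.detach().requires_grad_(True)
    for _ in range(steps):
        loss = g(x).pow(2).sum()
        (grad,) = torch.autograd.grad(loss, x)
        x = (x - lr * grad).detach().requires_grad_(True)
    return x.detach()

def constrained_langevin_step(x, step_size=1e-2):
    """One Langevin step on the energy, then a projection back onto the manifold.

    This project-after-step scheme is a simplified stand-in for the paper's
    constrained Langevin dynamics, which keep the chain on the manifold.
    """
    x = x.detach().requires_grad_(True)
    e = energy(x).sum()
    (grad,) = torch.autograd.grad(e, x)
    x = x - step_size * grad + (2 * step_size) ** 0.5 * torch.randn_like(x)
    return project_to_manifold(x)

# Usage: run a short chain to draw approximate samples from the constrained EBM.
samples = project_to_manifold(torch.randn(128, n_ambient))
for _ in range(100):
    samples = constrained_langevin_step(samples)
```

Projecting after each unconstrained step is only an approximation; the paper's constrained sampler is derived to respect the constraint $g_\phi(x) = 0$ directly rather than correcting for violations afterward.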
 Publication:
 arXiv e-prints
 Pub Date:
 June 2022
 DOI:
 10.48550/arXiv.2206.11267
 arXiv:
 arXiv:2206.11267
 Bibcode:
 2022arXiv220611267R
 Keywords:
 Statistics - Machine Learning;
 Computer Science - Machine Learning
 E-Print:
 Accepted to TMLR in 2023. Code: https://github.com/layer6ai-labs/implicit-manifolds