Rectangular Flows for Manifold Learning
Abstract
Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allow optimization of their parameters to be efficiently performed via maximum likelihood. However, data of interest are typically assumed to live in some (often unknown) low-dimensional manifold embedded in a high-dimensional ambient space. The result is a modelling mismatch since, by construction, the invertibility requirement implies high-dimensional support of the learned distribution. Injective flows, mappings from low- to high-dimensional spaces, aim to fix this discrepancy by learning distributions on manifolds, but the resulting volume-change term becomes more challenging to evaluate. Current approaches either avoid computing this term entirely using various heuristics, or assume the manifold is known beforehand and therefore are not widely applicable. Instead, we propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model, relying on careful use of automatic differentiation and techniques from numerical linear algebra. Both approaches perform end-to-end nonlinear manifold learning and density estimation for data projected onto this manifold. We study the trade-offs between our proposed methods, empirically verify that we outperform approaches ignoring the volume-change term by more accurately learning manifolds and the corresponding distributions on them, and show promising results on out-of-distribution detection. Our code is available at https://github.com/layer6ailabs/rectangularflows.
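For an injective mapping g from a low-dimensional space to a high-dimensional one, the volume-change term mentioned in the abstract is (1/2) log det(J^T J), where J is the Jacobian of g; this reduces to the familiar log |det J| when the map is square. A minimal JAX sketch of evaluating this term, using a toy hypothetical map g (not the paper's model, which instead computes gradients of this term tractably via automatic differentiation and numerical linear algebra):

```python
import jax
import jax.numpy as jnp

# Hypothetical injective map g: R^2 -> R^4, for illustration only.
def g(z):
    return jnp.concatenate([z, jnp.sin(z)])

def log_volume_change(z):
    # Jacobian J of g at z has shape (D, d) with D > d; the
    # change-of-volume term for an injective map is
    # (1/2) * log det(J^T J).
    J = jax.jacfwd(g)(z)
    _, logdet = jnp.linalg.slogdet(J.T @ J)
    return 0.5 * logdet

z = jnp.array([0.3, -1.2])
val = log_volume_change(z)
print(float(val))
```

Forming J^T J explicitly, as above, costs O(D d^2) per point; the paper's contribution is avoiding this exact computation when differentiating through the term during training.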
Publication: arXiv e-prints
Pub Date: June 2021
arXiv: arXiv:2106.01413
Bibcode: 2021arXiv210601413C
Keywords: Statistics - Machine Learning; Computer Science - Machine Learning
E-Print: NeurIPS 2021 Camera Ready. Code available at https://github.com/layer6ailabs/rectangularflows