This paper proposes a new type of generative model that quickly learns a latent representation without an encoder. This is achieved by initialising a latent vector with zeros, then using the gradient of the data-fitting loss with respect to this zero vector as the new latent point. The approach has similar characteristics to autoencoders, but with a simpler, naturally balanced architecture, and is demonstrated in a variational autoencoder equivalent that permits sampling. It also allows implicit representation networks to learn a space of implicit functions without requiring a hypernetwork, retaining their representation advantages with fewer parameters.
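As a rough sketch of this encoder-free latent step (a minimal illustration assuming a PyTorch-style autograd API; the `decoder` architecture, layer sizes, MSE data-fitting loss, and the negative-gradient sign convention are illustrative assumptions, not the paper's exact setup):

```python
import torch
import torch.nn.functional as F

# Illustrative decoder and sizes; the paper's architecture differs.
latent_dim, data_dim = 32, 784
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, data_dim),
)

def gon_latent(x):
    # Start every example's latent at the origin (a zero vector).
    z0 = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    loss = F.mse_loss(decoder(z0), x)
    # Gradient of the data-fitting loss w.r.t. the zero latent; its
    # negation (a descent step from the origin) is used here as the
    # latent point -- an assumed sign convention. create_graph=True
    # lets the outer loss backpropagate through this inner gradient.
    (grad,) = torch.autograd.grad(loss, z0, create_graph=True)
    return -grad

x = torch.rand(8, data_dim)     # toy data batch
z = gon_latent(x)               # latents obtained without an encoder
recon_loss = F.mse_loss(decoder(z), x)
recon_loss.backward()           # only the decoder has parameters to train
```

Because the latent is itself a gradient of the decoder's loss, a single network serves both the encoding and decoding roles, which is one reading of the abstract's claim that the architecture is naturally balanced.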
- Pub Date: July 2020
- Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Machine Learning
- MSC classes: 68T01 (Primary), 68T07 (Secondary)
- Comments: 6 pages, 9 figures, generalised to non-implicit functions and added new experiments