Why are deep nets reversible: A simple theory, with implications for training
Abstract
Generative models for deep learning are promising both to improve understanding of the model and to yield training methods requiring fewer labeled samples. Recent works use generative-model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying "proof of correctness" for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input. Furthermore, these models are complicated. The current paper takes a more theoretical tack. It presents a very simple generative model for ReLU deep nets, with the following characteristics: (i) The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is $A$, then the reverse transformation is $A^T$. (This can be seen as an explanation of the old weight-tying idea for denoising autoencoders.) (ii) Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers. Under this assumption, which is experimentally tested on real-life nets like AlexNet, it is formally proved that the feedforward net is a correct inference method for recovering the hidden layer. The generative model suggests a simple modification for training: use the generative model to produce synthetic data with labels and include it in the training set. Experiments support this theory of random-like deep nets and show that the synthetic data helps training.
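The reversibility property claimed in the abstract can be illustrated with a small numerical sketch: generate a synthetic input from a sparse hidden code via the reverse map $A^T$, then check that the ordinary feedforward pass approximately recovers the code. All sizes, the sparsity of the hidden code, and the rescaling factor of 2 below are illustrative assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical layer sizes: n hidden units, m visible units.
n, m = 500, 2000

# "Random-like" weights: the forward layer computes h = ReLU(A x),
# so the reverse (generative) direction uses A^T, as in the abstract.
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(n, m))

# A sparse, nonnegative hidden code (sparsity is an assumption of this sketch).
h = np.zeros(n)
support = rng.choice(n, size=25, replace=False)
h[support] = rng.uniform(1.0, 2.0, size=25)

# Generative pass: produce a synthetic input from the hidden layer.
x = relu(A.T @ h)

# Feedforward pass: run the ordinary net on the synthetic input.
# The factor 2 compensates for the mass that ReLU discards (illustrative choice).
h_hat = relu(2.0 * (A @ x))

# With random-like A, the recovered code should correlate strongly with h.
corr = np.corrcoef(h, h_hat)[0, 1]
print(f"correlation between h and recovered h_hat: {corr:.3f}")
```

Running this with random Gaussian weights typically yields a high correlation between `h` and `h_hat`, consistent with the claim that the feedforward net is an approximate inference method for the tied-weight generative model when the weights behave like random numbers.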
Publication: arXiv e-prints
Pub Date: November 2015
arXiv: arXiv:1511.05653
Bibcode: 2015arXiv151105653A
Keywords: Computer Science - Machine Learning