Inference in Deep Networks in High Dimensions
Abstract
Deep generative networks provide a powerful tool for modeling complex data in a wide range of applications. In inverse problems that use these networks as generative priors on data, one must often perform inference of the inputs of the networks from the outputs. Inference is also required for sampling during stochastic training of these generative models. This paper considers inference in a deep stochastic neural network where the parameters (e.g., weights, biases, and activation functions) are known and the problem is to estimate the values of the input and hidden units from the output. While several approximate algorithms have been proposed for this task, there are few analytic tools that can provide rigorous guarantees on the reconstruction error. This work presents a novel and computationally tractable output-to-input inference method called Multi-Layer Vector Approximate Message Passing (ML-VAMP). The proposed algorithm, derived from expectation propagation, extends earlier AMP methods that are known to achieve the replica predictions for optimality in simple linear inverse problems. Our main contribution shows that the mean-squared error (MSE) of ML-VAMP can be exactly predicted in a certain large system limit (LSL) where the number of layers is fixed and the weight matrices are random and orthogonally invariant with dimensions that grow to infinity. ML-VAMP is thus a principled method for output-to-input inference in deep networks with a rigorous and precise performance achievability result in high dimensions.
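To make the problem setting concrete, the following is a minimal, hypothetical sketch of output-to-input inference in a small two-layer network with known weights: given the observed output, estimate the input that produced it. This naive gradient-descent least-squares solver is not ML-VAMP; it only illustrates the inference task the abstract describes, with all dimensions and weight scalings chosen arbitrarily for the example.

```python
import numpy as np

# Known two-layer generative network: y = W2 @ relu(W1 @ z).
# Goal: recover the input z from the observed output y.
rng = np.random.default_rng(0)
d_in, d_hid, d_out = 20, 100, 200          # output dim > input dim
W1 = rng.standard_normal((d_hid, d_in)) / np.sqrt(d_in)
W2 = rng.standard_normal((d_out, d_hid)) / np.sqrt(d_hid)

def forward(z):
    """Deterministic forward pass with ReLU hidden units."""
    return W2 @ np.maximum(W1 @ z, 0.0)

z_true = rng.standard_normal(d_in)
y = forward(z_true)                        # noiseless observed output

# Naive baseline: gradient descent on 0.5 * ||y - f(z)||^2.
z = 0.1 * rng.standard_normal(d_in)        # nonzero init so ReLU grads flow
init_loss = 0.5 * np.sum((forward(z) - y) ** 2)
lr = 0.01
for _ in range(1000):
    h_pre = W1 @ z
    h = np.maximum(h_pre, 0.0)
    r = W2 @ h - y                         # output residual
    grad = W1.T @ ((W2.T @ r) * (h_pre > 0))  # backprop through ReLU
    z -= lr * grad
final_loss = 0.5 * np.sum((forward(z) - y) ** 2)
```

Unlike this local-search baseline, ML-VAMP passes messages layer by layer, and its per-iteration MSE admits an exact prediction in the large-system limit described above.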
Publication: arXiv e-prints
Pub Date: June 2017
arXiv: arXiv:1706.06549
Bibcode: 2017arXiv170606549F
Keywords: Computer Science - Machine Learning; Computer Science - Information Theory; Statistics - Machine Learning
E-Print: 27 pages