Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?
Abstract
Three important properties of a classification machinery are: (i) the system preserves the core information of the input data; (ii) the training examples convey information about unseen data; and (iii) the system is able to treat points from different classes differently. In this work we show that these fundamental properties are satisfied by the architecture of deep neural networks. We formally prove that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data. Similar points at the input of the network are likely to have a similar output. The theoretical analysis of deep networks presented here exploits tools used in the compressed sensing and dictionary learning literature, thereby making a formal connection between these important topics. The derived results allow drawing conclusions on the metric learning properties of the network and their relation to its structure, as well as providing bounds on the required size of the training set such that the training examples would represent the unseen data faithfully. The results are validated with state-of-the-art trained networks.
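The distance-preservation claim above can be illustrated numerically: a single layer with i.i.d. Gaussian weights followed by a ReLU keeps nearby inputs close and distant inputs far apart. The following is a minimal sketch of that behavior (not the paper's proof technique); the dimensions, the 1/sqrt(m) weight scaling, and the unit-sphere normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 64, 4096  # input and output dimensions (illustrative values)
# i.i.d. Gaussian weights, scaled so output norms are comparable to input norms
W = rng.normal(0.0, 1.0, size=(m, d)) / np.sqrt(m)

def layer(v):
    """One random-Gaussian layer followed by a ReLU nonlinearity."""
    return np.maximum(W @ v, 0.0)

# A point, a small perturbation of it, and an unrelated point, all on the unit sphere.
x = rng.normal(size=d); x /= np.linalg.norm(x)
y = x + 0.05 * rng.normal(size=d); y /= np.linalg.norm(y)
z = rng.normal(size=d); z /= np.linalg.norm(z)

dxy_in, dxz_in = np.linalg.norm(x - y), np.linalg.norm(x - z)
dxy_out = np.linalg.norm(layer(x) - layer(y))
dxz_out = np.linalg.norm(layer(x) - layer(z))

# Nearby inputs remain much closer at the output than distant ones.
print(f"input:  d(x,y)={dxy_in:.3f}  d(x,z)={dxz_in:.3f}")
print(f"output: d(x,y)={dxy_out:.3f}  d(x,z)={dxz_out:.3f}")
```

Note that the same weight matrix `W` must be used for all points; drawing fresh weights per input would destroy the embedding being compared.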
Publication: IEEE Transactions on Signal Processing
Pub Date: July 2016
DOI: 10.1109/TSP.2016.2546221
arXiv: arXiv:1504.08291
Bibcode: 2016ITSP...64.3444G
Keywords: Computer Science - Neural and Evolutionary Computing; Computer Science - Machine Learning; Statistics - Machine Learning; 62M45; I.5.1
E-Print: 14 pages, 13 figures