Deep ReLU Networks Preserve Expected Length
Abstract
Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize. One natural measure of complexity is how the network distorts length: if the network takes a unit-length curve as input, what is the length of the resulting curve of outputs? It has been widely believed that this length grows exponentially in network depth. We prove that in fact this is not the case: the expected length distortion does not grow with depth, and indeed shrinks slightly, for ReLU networks with standard random initialization. We also generalize this result by proving upper bounds both for higher moments of the length distortion and for the distortion of higher-dimensional volumes. These theoretical results are corroborated by our experiments.
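The headline claim (expected length distortion stays bounded as depth grows) is easy to probe numerically. Below is a minimal sketch, not the authors' code: it pushes a discretized unit-length segment through a randomly initialized ReLU network, using He initialization with zero biases as one standard setup, and compares the input and output curve lengths. All function names and parameter choices here are hypothetical illustrations.

# Sketch (assumed setup, not the paper's code): estimate length distortion
# of a random ReLU network on a unit-length input curve.
import numpy as np

def init_relu_net(width, depth, rng):
    """Random ReLU layers: weights ~ N(0, 2/fan_in) (He init), zero biases."""
    return [rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))
            for _ in range(depth)]

def forward(layers, x):
    """Apply the network to a batch of inputs with shape (width, n_points)."""
    for W in layers:
        x = np.maximum(W @ x, 0.0)  # ReLU activation
    return x

def curve_length(points):
    """Length of a piecewise-linear curve given as (n_points, dim) samples."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def length_distortion(width=100, depth=10, n_points=1000, seed=0):
    rng = np.random.default_rng(seed)
    layers = init_relu_net(width, depth, rng)
    # Input curve: a straight segment of exactly unit length between a random
    # point a and a + (unit vector), discretized into n_points samples.
    a = rng.normal(size=width)
    direction = rng.normal(size=width)
    b = a + direction / np.linalg.norm(direction)  # ||b - a|| = 1
    ts = np.linspace(0.0, 1.0, n_points)
    inputs = a[None, :] + ts[:, None] * (b - a)[None, :]
    outputs = forward(layers, inputs.T).T
    return curve_length(outputs) / curve_length(inputs)

if __name__ == "__main__":
    ratios = [length_distortion(seed=s) for s in range(20)]
    print(f"mean length distortion over 20 nets: {np.mean(ratios):.3f}")

Averaging the ratio over many random networks approximates the expected distortion; per the paper's result, this average should remain roughly constant, or shrink slightly, as depth increases rather than blow up exponentially.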
Publication: arXiv e-prints
Pub Date: February 2021
DOI: 10.48550/arXiv.2102.10492
arXiv: arXiv:2102.10492
Bibcode: 2021arXiv210210492H
Keywords: Statistics - Machine Learning; Computer Science - Machine Learning
E-Print: 18 pages, 4 figures