The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization
Abstract
Theoretical results show that neural networks can be approximated by Gaussian processes in the infinite-width limit. However, for fully connected networks, it has been previously shown that for any fixed network width, $n$, the Gaussian approximation gets worse as the network depth, $d$, increases. Given that modern networks are deep, this raises the question of how well modern architectures, like ResNets, are captured by the infinite-width limit. To provide a better approximation, we study ReLU ResNets in the infinite-depth-and-width limit, where both depth and width tend to infinity as their ratio, $d/n$, remains constant. In contrast to the Gaussian infinite-width limit, we show theoretically that the network exhibits log-Gaussian behaviour at initialization in the infinite-depth-and-width limit, with parameters depending on the ratio $d/n$. Using Monte Carlo simulations, we demonstrate that even basic properties of standard ResNet architectures are poorly captured by the Gaussian limit, but remarkably well captured by our log-Gaussian limit. Moreover, our analysis reveals that ReLU ResNets at initialization are hypoactivated: fewer than half of the ReLUs are activated. Additionally, we calculate the interlayer correlations, which have the effect of exponentially increasing the variance of the network output. Based on our analysis, we introduce Balanced ResNets, a simple architecture modification, which eliminates hypoactivation and interlayer correlations and is more amenable to theoretical analysis.
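The Monte Carlo experiments described above can be illustrated with a minimal sketch: randomly initialize a ReLU ResNet many times, record the log of the squared output norm, and track the fraction of activated ReLUs. The block structure ($x_{l+1} = x_l + W_l\,\mathrm{ReLU}(x_l)$), the branch scaling, and all dimensions below are illustrative assumptions, not necessarily the paper's exact architecture or parameterization.

```python
import numpy as np

def resnet_forward(x, weights):
    """One random ResNet at initialization: x_{l+1} = x_l + W_l relu(x_l).

    Returns the network output and the fraction of ReLUs that were active,
    i.e. had a positive pre-activation, across all layers.
    """
    active, total = 0, 0
    for W in weights:
        active += int(np.sum(x > 0))
        total += x.size
        x = x + W @ np.maximum(x, 0.0)
    return x, active / total

rng = np.random.default_rng(0)
n, d, trials = 64, 32, 500   # width, depth, number of random initializations

x0 = rng.standard_normal(n)
log_sq_norms, fracs = [], []
for _ in range(trials):
    # He-style init, scaled so each residual branch is O(1/sqrt(d));
    # an illustrative choice of scaling, assumed here for the sketch.
    weights = [rng.standard_normal((n, n)) * np.sqrt(2.0 / (n * d))
               for _ in range(d)]
    y, frac = resnet_forward(x0, weights)
    log_sq_norms.append(np.log(np.sum(y ** 2)))
    fracs.append(frac)

print("mean of log |output|^2 :", np.mean(log_sq_norms))
print("std  of log |output|^2 :", np.std(log_sq_norms))
print("mean ReLU activation fraction:", np.mean(fracs))
```

Under the log-Gaussian picture, a histogram of `log_sq_norms` should look approximately normal, with spread governed by the ratio $d/n$, and the activation fraction probes the hypoactivation effect.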
Publication: arXiv e-prints
Pub Date: June 2021
arXiv: arXiv:2106.04013
Bibcode: 2021arXiv210604013L
Keywords: Statistics - Machine Learning; Computer Science - Machine Learning