Why Random Pruning Is All We Need to Start Sparse
Abstract
Random masks have been shown empirically to define surprisingly effective sparse neural network models. The resulting sparse networks can often compete with dense architectures and state-of-the-art lottery ticket pruning algorithms, even though they do not rely on computationally expensive prune-train iterations and can be drawn initially without significant computational overhead. We offer a theoretical explanation of how random masks can approximate arbitrary target networks if they are wider by a logarithmic factor in the inverse sparsity $1 / \log(1/\text{sparsity})$. This overparameterization factor is necessary at least for 3-layer random networks, which elucidates the observed degrading performance of random networks at higher sparsity. At moderate to high sparsity levels, however, our results imply that sparser networks are contained within random source networks, so that any dense-to-sparse training scheme can be turned into a computationally more efficient sparse-to-sparse one by constraining the search to a fixed random mask. We demonstrate the feasibility of this approach in experiments for different pruning methods and propose particularly effective choices of initial layerwise sparsity ratios of the random source network. As a special case, we show theoretically and experimentally that random source networks also contain strong lottery tickets.
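To make the setup concrete, the following is a minimal sketch (not the paper's implementation) of drawing a fixed random mask per layer with prescribed layerwise sparsity ratios; any subsequent training would then only update the unmasked weights, turning a dense-to-sparse scheme into a sparse-to-sparse one. The layer shapes and sparsity ratios below are hypothetical examples, not the choices proposed in the paper.

```python
import numpy as np

def random_mask(shape, sparsity, rng):
    """Draw a fixed binary mask zeroing out the given fraction of weights."""
    n = int(np.prod(shape))
    n_zero = int(round(sparsity * n))
    mask = np.ones(n, dtype=bool)
    mask[:n_zero] = False
    rng.shuffle(mask)            # uniformly random placement of surviving weights
    return mask.reshape(shape)

rng = np.random.default_rng(0)
# Hypothetical 3-layer MLP shapes and layerwise sparsity ratios (illustrative only).
layer_shapes = [(784, 300), (300, 100), (100, 10)]
layer_sparsities = [0.9, 0.8, 0.5]

masks = [random_mask(s, p, rng) for s, p in zip(layer_shapes, layer_sparsities)]
# A sparse-to-sparse method would constrain all weight updates to mask == True.
for m in masks:
    print(m.size, int(m.sum()))
```

Since the mask is drawn once up front, it costs a single pass of random sampling rather than repeated prune-train cycles.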
 Publication:

arXiv e-prints
 Pub Date:
 October 2022
 DOI:
 10.48550/arXiv.2210.02412
 arXiv:
 arXiv:2210.02412
 Bibcode:
 2022arXiv221002412G
 Keywords:

 Computer Science - Machine Learning
 E-Print:
 Accepted for publication at ICML, 2023