Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable
Abstract
Recently, neural networks have been shown to perform exceptionally well in transforming two arbitrary sets into two linearly separable sets. Doing this with a randomly initialized neural network is of immense interest because the associated computation is cheaper than using fully trained networks. In this paper, we show that, with sufficient width, a randomly initialized one-layer neural network transforms two sets into two linearly separable sets with high probability. Furthermore, we provide explicit bounds on the required width of the neural network for this to occur. Our first bound is exponential in the input dimension and polynomial in all other parameters, while our second bound is independent of the input dimension, thereby overcoming the curse of dimensionality. We also perform an experimental study comparing the separation capacity of randomly initialized one-layer and two-layer neural networks. With correctly chosen biases, our study shows that for low-dimensional data, the two-layer neural network outperforms the one-layer network. However, the opposite is observed for higher-dimensional data.
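The phenomenon the abstract describes can be illustrated with a small numerical sketch: take two sets that are not linearly separable in the input space (here, two concentric circles, a standard toy example not taken from the paper), push them through a randomly initialized one-layer ReLU network, and check whether the resulting feature vectors are linearly separable. The Gaussian weight and bias distributions and the width below are illustrative choices, not the specific constructions or bounds analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concentric circles in R^2: not linearly separable in the input space.
n = 100
theta = rng.uniform(0, 2 * np.pi, size=2 * n)
radii = np.concatenate([np.full(n, 1.0), np.full(n, 2.0)])
X = np.stack([radii * np.cos(theta), radii * np.sin(theta)], axis=1)
y = np.concatenate([np.full(n, -1.0), np.full(n, 1.0)])

# Randomly initialized one-layer ReLU network. Width, weight, and bias
# distributions are illustrative assumptions, not the paper's constructions.
width = 1000
W = rng.normal(size=(2, width))
b = rng.normal(size=width)
features = np.maximum(X @ W + b, 0.0)  # shape (2n, width)

# Check linear separability of the features: fit a linear classifier by
# least squares and verify it classifies every point correctly.
Phi = np.hstack([features, np.ones((2 * n, 1))])  # append a bias column
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = np.sign(Phi @ w)
separable = bool(np.all(pred == y))
print("linearly separable after random layer:", separable)
```

With the width much larger than the number of data points, the random ReLU features are generically in general position, so a linear classifier that interpolates the labels exists; the least-squares fit then attains zero classification error, witnessing linear separability.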
Publication:
 arXiv e-prints
Pub Date:
 May 2022
arXiv:
 arXiv:2205.11716
Bibcode:
 2022arXiv220511716G
Keywords:
 Computer Science - Machine Learning;
 Mathematics - Probability;
 Statistics - Machine Learning