Approximation capabilities of neural networks on unbounded domains
Abstract
In this paper, we prove that a shallow neural network with a monotone sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function can approximate arbitrarily well any function in L^p(R × [0,1]^n) for p ≥ 2. We also prove that a shallow neural network with a sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function expresses no nonzero integrable function on the Euclidean plane. Together with a recent result that deep ReLU networks can approximate arbitrarily well any integrable function on Euclidean spaces, this provides a new perspective on the advantage of multiple hidden layers in the context of ReLU networks. Lastly, we prove that a ReLU network of depth 3 is a universal approximator in L^p(R^n).
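The first claim of the abstract can be written formally as follows. This is a paraphrase of the density statement as summarized above, not the paper's exact theorem; the symbols g, N, c_k, w_k, b_k for the shallow network are our notation:

```latex
\forall\, f \in L^p(\mathbb{R}\times[0,1]^n),\; p \ge 2,\quad
\forall\, \varepsilon > 0,\quad
\exists\, g(x) = \sum_{k=1}^{N} c_k\,\sigma(w_k \cdot x + b_k)
\quad\text{with}\quad
\|f - g\|_{L^p(\mathbb{R}\times[0,1]^n)} < \varepsilon,
```

where σ is one of the listed activations (a monotone sigmoid, ReLU, ELU, Softplus, or LeakyReLU). The unbounded factor R is what distinguishes this from the classical universal approximation theorems on compact domains.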
- Publication: arXiv e-prints
- Pub Date: October 2019
- DOI: 10.48550/arXiv.1910.09293
- arXiv: arXiv:1910.09293
- Bibcode: 2019arXiv191009293W
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning
- E-Print: will appear in Neural Networks