Negative results for approximation using single layer and multilayer feedforward neural networks
Abstract
We prove a negative result for the approximation of functions defined on compact subsets of $\mathbb{R}^d$ (where $d \geq 2$) by feedforward neural networks with one hidden layer and an arbitrary continuous activation function. In a nutshell, this result asserts the existence of target functions that are as difficult to approximate by these networks as one may wish. We also prove an analogous result (for general $d \in \mathbb{N}$) for neural networks with an \emph{arbitrary} number of hidden layers, for activation functions that are either rational functions or continuous splines with finitely many pieces.
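For orientation, here is a hedged sketch of how such "arbitrarily difficult to approximate" statements are typically formalized; the notation $\Sigma_n^\sigma$, the choice of the uniform norm, and the sequence $(\epsilon_n)$ are illustrative assumptions for this sketch, not taken verbatim from the paper. Writing $\Sigma_n^\sigma$ for the class of functions computed by one-hidden-layer networks with at most $n$ neurons and continuous activation $\sigma$,
\[
  \Sigma_n^\sigma = \Bigl\{\, x \mapsto \sum_{k=1}^{n} c_k\, \sigma(\langle a_k, x\rangle + b_k) : a_k \in \mathbb{R}^d,\ b_k, c_k \in \mathbb{R} \,\Bigr\},
\]
the negative result would then take the form: for every sequence $\epsilon_n \searrow 0$ and every compact $K \subset \mathbb{R}^d$ (with $d \geq 2$), there exists a target function $f \in C(K)$ whose best approximation error decays no faster than the prescribed sequence,
\[
  \inf_{g \in \Sigma_n^\sigma} \|f - g\|_{C(K)} \geq \epsilon_n \quad \text{for all } n \in \mathbb{N}.
\]
The exact function classes, norms, and constants are those of the paper.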
- Publication: arXiv e-prints
- Pub Date: October 2018
- DOI: 10.48550/arXiv.1810.10032
- arXiv: arXiv:1810.10032
- Bibcode: 2018arXiv181010032A
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning
- E-Print: 12 pages, submitted to a journal