Training Neural Networks is ∃R-complete
Abstract
Given a neural network, training data, and a threshold, it was known to be NP-hard to find weights for the neural network such that the total error is below the threshold. We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is ∃R-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, ∃R is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP.
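As an illustration of the kind of decision problem ∃R captures, here is a tiny instance of the existential theory of the reals: does there exist a real x with x² = 2 and x > 0? This toy example and the use of sympy are assumptions for exposition only; they are not taken from the paper.

```python
import sympy as sp

# Illustrative ETR instance (not from the paper):
#   does there exist a real x with x**2 = 2 and x > 0 ?
x = sp.Symbol('x', real=True)

# All real solutions of the polynomial equation x**2 = 2.
solutions = sp.solveset(sp.Eq(x**2, 2), x, domain=sp.S.Reals)

# The instance is a "yes" instance iff some real solution
# also satisfies the strict inequality x > 0.
feasible = any(s > 0 for s in solutions)
print(feasible)
```

Note that the real witness here, x = √2, is irrational: in general, ∃R instances may have no rational solutions at all, which is one reason the problem is not obviously in NP.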
 Publication:
 arXiv e-prints
 Pub Date:
 February 2021
 arXiv:
 arXiv:2102.09798
 Bibcode:
 2021arXiv210209798A
 Keywords:

 Computer Science - Computational Complexity;
 Computer Science - Artificial Intelligence;
 Computer Science - Data Structures and Algorithms;
 Computer Science - Machine Learning;
 Computer Science - Neural and Evolutionary Computing
 E-Print:
 14 pages, 9 figures