Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
Abstract
We give a polynomial-time algorithm for learning neural networks with one layer of sigmoids feeding into any Lipschitz, monotone activation function (e.g., sigmoid or ReLU). We make no assumptions on the structure of the network, and the algorithm succeeds with respect to {\em any} distribution on the unit ball in $n$ dimensions (hidden weight vectors also have unit norm). This is the first assumption-free, provably efficient algorithm for learning neural networks with two nonlinear layers. Our algorithm, {\em Alphatron}, is a simple, iterative update rule that combines isotonic regression with kernel methods. It outputs a hypothesis that yields efficient oracle access to interpretable features. It also suggests a new approach to Boolean learning problems via real-valued conditional-mean functions, sidestepping traditional hardness results from computational learning theory. Along these lines, we subsume and improve many longstanding results for PAC learning Boolean functions to the more general, real-valued setting of {\em probabilistic concepts}, a model that (unlike PAC learning) requires non-i.i.d. noise-tolerance.
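The abstract's description of an "iterative update rule that combines isotonic regression with kernel methods" can be illustrated with a minimal sketch. This is not the paper's exact algorithm or analysis; it is a kernelized residual-update rule in the same spirit, assuming a known monotone, Lipschitz outer activation `u` (here a sigmoid) and an RBF kernel. All function names and parameters (`alphatron`, `T`, `lam`, `gamma`) are illustrative choices, not from the paper.

```python
# Hedged sketch of an Alphatron-style learner: maintain weights alpha over
# training points and predict h(x) = u(sum_i alpha_i * K(x_i, x)), where u is
# a known monotone, Lipschitz activation. Assumptions (not from the paper):
# RBF kernel, sigmoid u, and the specific step size lam/m.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between row sets A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def alphatron(X, y, u=sigmoid, T=500, lam=2.0, gamma=1.0):
    """Iterative kernel update: alpha_i += (lam/m) * (y_i - h(x_i))."""
    m = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(m)
    for _ in range(T):
        preds = u(K @ alpha)              # current predictions h(x_i)
        alpha += (lam / m) * (y - preds)  # residual-driven update
    return alpha

# Toy usage: fit a noiseless sigmoid-of-linear target; points and the hidden
# weight vector are unit-norm, matching the abstract's setting.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # points on the unit sphere
w = np.array([0.6, -0.8, 0.0])                 # unit-norm hidden weights
y = sigmoid(X @ w)
alpha = alphatron(X, y)
mse = np.mean((sigmoid(rbf_kernel(X, X) @ alpha) - y) ** 2)
```

The update is gradient-like: each training point's weight grows in proportion to its residual, and the monotonicity of `u` ensures the residuals shrink; in this sketch the final `mse` is small compared to the initial error of predicting 0.5 everywhere.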
 Publication:
 arXiv e-prints
 Pub Date:
 September 2017
 arXiv:
 arXiv:1709.06010
 Bibcode:
 2017arXiv170906010G
 Keywords:

 Computer Science - Data Structures and Algorithms;
 Computer Science - Machine Learning;
 Statistics - Machine Learning
 E-Print:
 Changed title, included new results