Approximation Schemes for ReLU Regression
Abstract
We consider the fundamental problem of ReLU regression, where the goal is to output the best-fitting ReLU with respect to square loss given access to draws from some unknown distribution. We give the first efficient, constant-factor approximation algorithm for this problem, assuming the underlying distribution satisfies some weak concentration and anti-concentration conditions (satisfied, for example, by all log-concave distributions). This solves the main open problem of Goel et al., who proved hardness results for any exact algorithm for ReLU regression (up to an additive $\epsilon$). Using more sophisticated techniques, we can improve our results and obtain a polynomial-time approximation scheme for any subgaussian distribution. Given the aforementioned hardness results, these guarantees cannot be substantially improved. Our main insight is a new characterization of surrogate losses for non-convex activations. While prior work had established the existence of convex surrogates for monotone activations, we show that properties of the underlying distribution actually induce strong convexity for the loss, allowing us to relate the global minimum to the activation's Chow parameters.
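The surrogate-loss idea for monotone activations can be illustrated with a minimal sketch. For an increasing activation $\sigma$, the loss $L(w) = \mathbb{E}[\Phi(w \cdot x) - y\,(w \cdot x)]$ with $\Phi' = \sigma$ is convex in $w$, and its gradient is $\mathbb{E}[(\sigma(w \cdot x) - y)\,x]$; descending this gradient is the classic GLMtron-style update. The code below is an illustrative sketch of that idea on synthetic Gaussian (log-concave) data, not the algorithm of this paper; the function name and step-size choices are ours.

```python
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


def surrogate_descent_relu(X, y, steps=500, lr=0.5):
    """Gradient descent on the convex surrogate loss for ReLU.

    The surrogate's gradient is (1/n) * sum_i (relu(w.x_i) - y_i) x_i,
    which is convex in w because ReLU is monotone (GLMtron-style update).
    Illustrative sketch only, not the paper's approximation scheme.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = (relu(X @ w) - y) @ X / n
        w -= lr * grad
    return w


# Realizable synthetic data from a standard Gaussian distribution,
# which satisfies the concentration/anti-concentration conditions.
rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = relu(X @ w_true)

w_hat = surrogate_descent_relu(X, y)
```

In this noiseless, realizable setting the surrogate's global minimizer coincides with the true parameter vector, so the recovered `w_hat` should be close to `w_true`; with agnostic noise the surrogate minimum instead relates to the activation's Chow parameters, as the abstract describes.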
Publication: arXiv e-prints
Pub Date: May 2020
arXiv: arXiv:2005.12844
Bibcode: 2020arXiv200512844D
Keywords:
 Computer Science - Machine Learning;
 Computer Science - Data Structures and Algorithms;
 Statistics - Machine Learning