Surprises in adversarially-trained linear regression
Abstract
State-of-the-art machine learning models can be vulnerable to very small, adversarially constructed input perturbations. Adversarial training is an effective approach to defend against such examples. It is formulated as a min-max problem, searching for the best solution when the training data is corrupted by worst-case attacks. For linear regression problems, adversarial training can be formulated as a convex problem. We use this reformulation to make two technical contributions. First, we formulate the training problem as an instance of robust regression to reveal its connection to parameter-shrinking methods, specifically that $\ell_\infty$-adversarial training produces sparse solutions. Second, we study adversarial training in the overparameterized regime, i.e. when there are more parameters than data points. We prove that adversarial training with small disturbances yields the minimum-norm solution that interpolates the training data. Ridge regression and lasso approximate such interpolating solutions as their regularization parameter vanishes. By contrast, for adversarial training, the transition into the interpolation regime is abrupt and occurs at nonzero values of the disturbance. This result is proved and illustrated with numerical examples.
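The convex reformulation mentioned in the abstract can be sketched numerically. For linear regression under an $\ell_\infty$ attack of radius $\varepsilon$, the inner maximization has a well-known closed form, $(|y - x^\top w| + \varepsilon \|w\|_1)^2$, whose $\ell_1$ term hints at the lasso-like, sparsity-inducing behavior the paper discusses. The snippet below is an illustrative sketch with synthetic data, not the authors' code:

```python
import numpy as np

# For a single example (x, y) and parameters w, the l-infinity adversarial
# squared loss  max_{||d||_inf <= eps} (y - (x + d)^T w)^2  equals
# (|y - x^T w| + eps * ||w||_1)^2.  We check this against an explicit
# worst-case perturbation.

rng = np.random.default_rng(0)
x = rng.normal(size=5)   # illustrative features
w = rng.normal(size=5)   # illustrative parameter vector
y = 1.3                  # illustrative target
eps = 0.1                # attack radius

r = y - x @ w  # residual on the clean input

# Closed-form value of the inner maximization.
closed_form = (abs(r) + eps * np.sum(np.abs(w))) ** 2

# A maximizing perturbation: push each coordinate by eps in the direction
# that enlarges the residual, d = -eps * sign(w) * sign(r).
d = -eps * np.sign(w) * np.sign(r)
attacked = (y - (x + d) @ w) ** 2

assert np.isclose(closed_form, attacked)
```

Averaging this closed form over the training set turns the min-max problem into an ordinary convex minimization in $w$, which is the starting point for both contributions described above.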
 Publication:

arXiv e-prints
 Pub Date:
 May 2022
 DOI:
 10.48550/arXiv.2205.12695
 arXiv:
 arXiv:2205.12695
 Bibcode:
 2022arXiv220512695R
 Keywords:

 Statistics - Machine Learning;
 Computer Science - Cryptography and Security;
 Computer Science - Machine Learning;
 Electrical Engineering and Systems Science - Signal Processing;
 Mathematics - Statistics Theory