Sparse approximation in learning via neural ODEs
Abstract
We consider the neural ODE and optimal control perspective of supervised learning with $L^1(0,T;\mathbb{R}^{d_u})$ control penalties, where rather than only minimizing a final cost for the state, we integrate this cost over the entire time horizon. Under natural homogeneity assumptions on the nonlinear dynamics, we prove that any optimal control (for this cost) is sparse, in the sense that it vanishes beyond some positive stopping time. We also provide a polynomial stability estimate for the running cost of the state with respect to the time horizon. This can be seen as a \emph{turnpike property} result for nonsmooth functionals and dynamics, obtained without any smallness assumptions on the data; both features are new in the literature. In practical terms, the temporal sparsity and stability results could then be used to discard unnecessary layers in the corresponding residual neural network (ResNet) without removing relevant information.
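To fix ideas, a functional of the kind described (an integrated state cost plus an $L^1$ control penalty) might take the following form; the specific notation $f$, $\Phi$, $y$, $\alpha$ here is an illustrative assumption, not necessarily the paper's:
\[
\min_{u \in L^1(0,T;\mathbb{R}^{d_u})} \; \int_0^T \big\| \Phi(x_u(t)) - y \big\|^2 \, dt \;+\; \alpha \, \|u\|_{L^1(0,T;\mathbb{R}^{d_u})}
\quad \text{subject to} \quad \dot{x}_u(t) = f(x_u(t), u(t)), \quad x_u(0) = x_0,
\]
where $x_u$ is the state trajectory driven by the control $u$, $\Phi$ is an output (readout) map, $y$ is the target, and $\alpha > 0$ weights the penalty. The sparsity result asserted in the abstract would then mean that an optimal $u^*$ satisfies $u^*(t) = 0$ for all $t \geq T^*$ for some stopping time $T^* \in (0, T)$; in the ResNet analogy, layers beyond depth $T^*$ act as the identity and can be discarded.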
Publication: arXiv e-prints
Pub Date: February 2021
arXiv: arXiv:2102.13566
Bibcode: 2021arXiv210213566E
Keywords: Computer Science - Machine Learning; Mathematics - Optimization and Control; Statistics - Machine Learning
E-Print: 24 pages, 5 figures