Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
Abstract
We study the $\ell^1$ regularized least squares optimization problem in a separable Hilbert space. We show that the iterative soft-thresholding algorithm (ISTA) converges linearly, without making any assumption on the linear operator at play or on the problem. The result is obtained by combining two key concepts: the notion of extended support, a finite set containing the support, and the notion of conditioning over finite-dimensional sets. We prove that ISTA identifies the extended support of the solution after a finite number of iterations, and we derive linear convergence from the conditioning property, which is always satisfied for $\ell^1$ regularized least squares problems. Our analysis extends to the entire class of thresholding gradient algorithms, for which we provide a conceptually new proof of strong convergence, as well as convergence rates.
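For concreteness, below is a minimal finite-dimensional sketch of ISTA for the problem $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$, the algorithm the abstract analyzes. The function names (`soft_threshold`, `ista`) and the step size $1/\|A\|^2$ are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=1000):
    """ISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Uses step size 1/L with L = ||A||^2 (largest squared singular value),
    the Lipschitz constant of the gradient of the smooth part.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                     # gradient of 0.5 * ||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)    # proximal gradient step
    return x

# Example on a small random problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
x_hat = ista(A, b, lam=0.1)
```

The iterates are sparse by construction of the soft-thresholding step; the paper's finite-time identification result concerns exactly which coordinates (the extended support) can remain active in the limit.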
- Publication: arXiv e-prints
- Pub Date: December 2017
- arXiv: arXiv:1712.00357
- Bibcode: 2017arXiv171200357G
- Keywords:
- Mathematics - Optimization and Control;
- 49K40;
- 49M29;
- 65J10;
- 65J15;
- 65J20;
- 65J22;
- 65K15;
- 90C25;
- 90C46
- E-Print:
- 17 pages, 5 figures