Frank-Wolfe Method is Automatically Adaptive to Error Bound Condition
Abstract
The error bound condition has recently gained revived interest in optimization. It has been leveraged to derive faster convergence for many popular algorithms, including subgradient methods, the proximal gradient method, and the accelerated proximal gradient method. However, it has remained unclear whether the Frank-Wolfe (FW) method can enjoy faster convergence under an error bound condition. In this short note, we give an affirmative answer to this question. We show that the FW method (with a line search for the step size) for optimization over a strongly convex set is automatically adaptive to the error bound condition of the problem. In particular, the iteration complexity of FW can be characterized by $O(\max(1/\epsilon^{1-\theta}, \log(1/\epsilon)))$, where $\theta\in[0,1]$ is a constant that characterizes the error bound condition. Our results imply that if the constraint set is characterized by a strongly convex function and the objective function attains a smaller value outside the considered domain, then the FW method enjoys a fast rate of $O(1/t^2)$.
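For context, an error bound condition of this type is commonly written in the Hölderian form $\mathrm{dist}(x, \mathcal{X}_*) \le c\,(f(x) - f_*)^{\theta}$ over the feasible set; whether the note uses exactly this formulation is an assumption here. The abstract contains no pseudocode, so the following is a minimal sketch of the FW iteration it describes (linear minimization oracle plus line search), instantiated for a least-squares objective over a Euclidean ball, a standard example of a strongly convex set. The function name `frank_wolfe_ball`, the quadratic objective, and all parameters are illustrative assumptions, not the note's exact setup.

```python
import numpy as np

def frank_wolfe_ball(A, b, r, max_iter=500, tol=1e-10):
    """Frank-Wolfe with exact line search for min 0.5*||Ax - b||^2
    over the Euclidean ball {x : ||x|| <= r} (a strongly convex set)."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)           # gradient of the quadratic objective
        # Linear minimization oracle over the ball: argmin_{||s|| <= r} <g, s>
        s = -r * g / max(np.linalg.norm(g), 1e-16)
        d = s - x                        # Frank-Wolfe direction
        gap = -g @ d                     # FW duality gap; certifies suboptimality
        if gap <= tol:
            break
        # Exact line search along d (closed form for a quadratic objective),
        # clipped to [0, 1] so the iterate stays feasible.
        Ad = A @ d
        gamma = min(1.0, gap / max(Ad @ Ad, 1e-16))
        x = x + gamma * d
    return x

# Usage sketch: plant an unconstrained minimizer outside the ball, so the
# objective attains a smaller value outside the domain -- the regime in which
# the note claims the fast O(1/t^2) rate.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ (rng.standard_normal(5) * 3.0)
x_hat = frank_wolfe_ball(A, b, r=1.0)
print(np.linalg.norm(x_hat))  # ~1.0: the solution sits on the ball's boundary
```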
- Publication: arXiv e-prints
- Pub Date: October 2018
- DOI: 10.48550/arXiv.1810.04765
- arXiv: arXiv:1810.04765
- Bibcode: 2018arXiv181004765X
- Keywords: Mathematics - Optimization and Control