Practical Acceleration of the Condat-Vũ Algorithm
Abstract
The Condat-Vũ algorithm is a widely used primal-dual method for optimizing composite objectives of three functions. Several algorithms for optimizing composite objectives of two functions are special cases of Condat-Vũ, including proximal gradient descent (PGD). It is well known that PGD exhibits suboptimal performance: a simple adjustment accelerates its convergence rate from $\mathcal{O}(1/T)$ to $\mathcal{O}(1/T^2)$ on convex objectives, and this accelerated rate is optimal. In this work, we show that a simple adjustment to the Condat-Vũ algorithm allows it to recover accelerated PGD (APGD) as a special case, rather than PGD. We prove that this accelerated Condat-Vũ algorithm achieves optimal convergence rates and significantly outperforms the traditional Condat-Vũ algorithm in regimes where Condat-Vũ approximates the dynamics of PGD. We demonstrate the effectiveness of our approach in applications to machine learning and computational imaging.
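The acceleration the abstract refers to is the classical Nesterov/FISTA-style momentum step applied to proximal gradient descent, which lifts the rate from $\mathcal{O}(1/T)$ to $\mathcal{O}(1/T^2)$ for minimizing $f(x) + g(x)$ with $f$ smooth and $g$ prox-friendly. A minimal sketch of that APGD special case (not the paper's accelerated Condat-Vũ method itself; the lasso example, step size, and iteration count below are illustrative assumptions):

```python
import numpy as np

def apgd(grad_f, prox_g, x0, step, num_iters=500):
    """Accelerated proximal gradient descent (FISTA-style momentum).

    grad_f : gradient of the smooth term f
    prox_g : proximal operator of g, called as prox_g(v, step)
    step   : step size, typically 1/L with L the Lipschitz constant of grad_f
    """
    x = x0.copy()
    y = x0.copy()   # extrapolated point
    t = 1.0         # momentum parameter
    for _ in range(num_iters):
        # Proximal gradient step taken at the extrapolated point y
        x_new = prox_g(y - step * grad_f(y), step)
        # Nesterov momentum update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Illustrative two-function composite: a small noiseless lasso problem,
# f(x) = 0.5 * ||Ax - b||^2,  g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
# Soft-thresholding is the proximal operator of the l1 norm
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)
x_hat = apgd(grad_f, prox_g, np.zeros(20), 1.0 / L)
```

Setting the momentum coefficient `(t - 1) / t_new` to zero recovers plain PGD; the paper's contribution is an analogous adjustment inside the three-function Condat-Vũ iteration so that APGD, rather than PGD, emerges as the two-function special case.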
- Publication: arXiv e-prints
- Pub Date: March 2024
- DOI: 10.48550/arXiv.2403.17100
- arXiv: arXiv:2403.17100
- Bibcode: 2024arXiv240317100D
- Keywords: Mathematics - Optimization and Control