First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems
Abstract
In this paper, we consider first-order convergence theory and algorithms for solving a class of nonconvex-nonconcave min-max saddle-point problems, whose objective function is weakly convex in the variables of minimization and weakly concave in the variables of maximization. It has many important applications in machine learning, including training Generative Adversarial Nets (GANs). We propose an algorithmic framework motivated by the inexact proximal point method, where the weakly monotone variational inequality (VI) corresponding to the original min-max problem is solved by approximately solving a sequence of strongly monotone VIs constructed by adding a strongly monotone mapping to the original gradient mapping. We prove first-order convergence of the generic algorithmic framework to a nearly stationary solution of the original min-max problem, and establish different rates by employing different algorithms for solving each strongly monotone VI. Experiments verify the convergence theory and also demonstrate the effectiveness of the proposed methods on training GANs.
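The framework described above can be sketched in a few lines. The following is a minimal illustration (our own, not the paper's implementation) on a toy problem: each outer step adds a strongly monotone term gamma*(z - z_k) to the gradient mapping, so the inner subproblem is a strongly monotone VI that plain gradient steps solve reliably; parameter names and the inner solver choice are assumptions for illustration.

```python
import numpy as np

# Toy objective f(x, y) = x * y (bilinear, hence trivially weakly
# convex-weakly concave).  Its gradient mapping is
# F(x, y) = (df/dx, -df/dy) = (y, -x), which is monotone but not
# strongly monotone, so plain gradient descent-ascent merely cycles.
def F(z):
    x, y = z
    return np.array([y, -x])

def inexact_proximal_point(z0, gamma=1.0, outer_iters=50,
                           inner_iters=100, eta=0.1):
    """Sketch of the inexact proximal point framework (our naming):
    each outer iteration approximately solves the strongly monotone VI
    with operator F(z) + gamma * (z - z_k) via simple gradient steps."""
    z_k = np.asarray(z0, dtype=float)
    for _ in range(outer_iters):
        z = z_k.copy()
        for _ in range(inner_iters):
            # The added term gamma * (z - z_k) makes the operator
            # gamma-strongly monotone, so these steps contract.
            z = z - eta * (F(z) + gamma * (z - z_k))
        z_k = z  # proximal center moves to the approximate solution
    return z_k

z_star = inexact_proximal_point([1.0, 1.0])
print(np.linalg.norm(z_star))  # approaches the saddle point (0, 0)
```

For this bilinear toy problem the iterates converge to the saddle point at the origin, whereas simultaneous gradient descent-ascent with the same step size would diverge; the strongly monotone regularization is what stabilizes each subproblem.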
 Publication:

arXiv e-prints
 Pub Date:
 October 2018
 arXiv:
 arXiv:1810.10207
 Bibcode:
 2018arXiv181010207L
 Keywords:

 Mathematics - Optimization and Control;
 Statistics - Machine Learning
 E-Print:
 Accepted by Journal of Machine Learning Research (JMLR)