Consensus-based optimization methods converge globally
Abstract
In this paper we study consensus-based optimization (CBO), a multi-agent metaheuristic derivative-free optimization method that can globally minimize nonconvex nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we devise a novel technique for proving convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a very large class of optimization problems as the number of optimizing agents goes to infinity. Furthermore, we improve prior analyses by requiring minimal assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a core component of this analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest. From the result of CBO convergence in mean-field law, it becomes apparent that the hardness of any global optimization problem is necessarily encoded in the rate of the mean-field approximation, for which we provide a novel probabilistic quantitative estimate. The combination of these results allows us to obtain global convergence guarantees for the numerical CBO method with provable polynomial complexity.
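To make the dynamics concrete, the following is a minimal sketch of a CBO iteration in Python (not the authors' code): agents drift toward a consensus point formed with Gibbs weights exp(-alpha f) and diffuse with noise scaled by their distance to that point. All parameter values and the test objective below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cbo_minimize(f, dim=2, n_agents=100, steps=1000, dt=0.01,
                 lam=1.0, sigma=0.7, alpha=50.0, seed=0):
    """Minimal consensus-based optimization (CBO) sketch.

    Each agent is drawn toward a weighted average of all agents
    (weights exp(-alpha * f), so low-objective agents dominate)
    and perturbed by noise proportional to its distance from that
    consensus point, so the noise vanishes as agents agree.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_agents, dim))  # initial agent positions
    for _ in range(steps):
        fx = np.array([f(x) for x in X])
        w = np.exp(-alpha * (fx - fx.min()))          # numerically stabilized Gibbs weights
        v = (w[:, None] * X).sum(axis=0) / w.sum()    # consensus point
        diff = X - v
        # drift toward consensus + componentwise (anisotropic) diffusion
        X = (X - lam * diff * dt
             + sigma * diff * np.sqrt(dt) * rng.standard_normal(X.shape))
    return v

# Illustrative nonconvex test objective (Rastrigin-like), global minimizer at 0
def rastrigin_like(x):
    return float(np.sum(x**2 + 2.5 * (1.0 - np.cos(2.0 * np.pi * x))))

x_star = cbo_minimize(rastrigin_like)
```

With this setup the consensus point typically settles near the global minimizer at the origin, although, being a stochastic method, individual runs can stall in a local basin; the paper's guarantees concern the mean-field regime as the number of agents grows.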
 Publication:

arXiv e-prints
 Pub Date:
 March 2021
 arXiv:
 arXiv:2103.15130
 Bibcode:
 2021arXiv210315130F
 Keywords:

 Mathematics  Numerical Analysis;
 Mathematics  Analysis of PDEs;
 Mathematics  Optimization and Control;
 65K10;
 90C26;
 90C56;
 35Q90;
 35Q84
 E-Print:
 33 pages, 3 figures