Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices
Abstract
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming $\nu$ satisfies a log-Sobolev inequality and the Hessian of $f$ is bounded. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order $q > 1$ assuming the limit of ULA satisfies either the log-Sobolev or Poincaré inequality. We also prove a bound on the bias of the limiting distribution of ULA assuming third-order smoothness of $f$, without requiring isoperimetry.
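For context, here is a minimal sketch of the standard ULA iteration the abstract refers to, $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$. The quadratic potential, step size `eta`, and helper name `ula_sample` below are illustrative assumptions, not taken from the paper.

```python
# Sketch of the Unadjusted Langevin Algorithm (ULA) for sampling from a
# target density nu proportional to exp(-f) on R^n.
import numpy as np

def ula_sample(grad_f, x0, eta=1e-2, n_steps=10_000, rng=None):
    """Iterate x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2*eta) * xi_k, xi_k ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Example target: standard Gaussian, f(x) = ||x||^2 / 2, so grad_f(x) = x.
samples = ula_sample(grad_f=lambda x: x, x0=np.zeros(2), eta=0.05, n_steps=5000)
print(samples.mean(axis=0), samples.var(axis=0))  # mean near 0; variance slightly off 1 (ULA bias)
```

The example illustrates the bias discussed in the abstract: for any fixed step size, the stationary distribution of ULA differs from $\nu$, which is why the paper bounds both the convergence rate and the bias of the limiting distribution.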
- Publication:
- arXiv e-prints
- Pub Date:
- March 2019
- DOI:
- 10.48550/arXiv.1903.08568
- arXiv:
- arXiv:1903.08568
- Bibcode:
- 2019arXiv190308568V
- Keywords:
- Computer Science - Data Structures and Algorithms;
- Computer Science - Machine Learning;
- Mathematics - Probability;
- Statistics - Machine Learning
- E-Print:
- v4: Updated discussion and added properties of biased limit;
- v3: Simplified analysis of Rényi divergence, improved exposition, and added figures;
- v2: Added analysis of Rényi divergence and Poincaré assumption