
Convergence of Langevin MCMC in KL-divergence

(1705.09048)
Published May 25, 2017 in stat.ML

Abstract

Langevin diffusion is a commonly used tool for sampling from a given distribution. In this work, we establish that when the target density $p^*$ is such that $\log p^*$ is $L$-smooth and $m$-strongly convex, discrete Langevin diffusion produces a distribution $p$ with $KL(p\|p^*)\leq \epsilon$ in $\tilde{O}(\frac{d}{\epsilon})$ steps, where $d$ is the dimension of the sample space. We also study the convergence rate when the strong-convexity assumption is absent. By considering the Langevin diffusion as a gradient flow in the space of probability distributions, we obtain an elegant analysis that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.
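The "discrete Langevin diffusion" referred to in the abstract is the standard Euler discretization of the Langevin SDE (often called the unadjusted Langevin algorithm): starting from $x_0$, iterate $x_{k+1} = x_k - \eta \nabla U(x_k) + \sqrt{2\eta}\,\xi_k$ with $U = -\log p^*$ and $\xi_k \sim \mathcal{N}(0, I)$. The sketch below is illustrative only and not taken from the paper; the target (a standard Gaussian, which is $1$-smooth and $1$-strongly convex), the step size, and the iteration count are assumptions chosen for demonstration.

```python
import numpy as np

def ula_sample(grad_neg_log_p, x0, step_size, n_steps, rng=None):
    """Unadjusted Langevin algorithm (discrete Langevin diffusion).

    Iterates x_{k+1} = x_k - step_size * grad_neg_log_p(x_k)
                           + sqrt(2 * step_size) * N(0, I),
    where grad_neg_log_p(x) returns the gradient of -log p*(x).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_neg_log_p(x) + np.sqrt(2.0 * step_size) * noise
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Illustrative target: standard Gaussian, so -log p*(x) = ||x||^2 / 2 + const
    # and grad of -log p* is simply x (L = m = 1).
    d = 10
    samples = ula_sample(lambda x: x, x0=np.zeros(d), step_size=0.05, n_steps=5000)
    print(samples[-1000:].mean(axis=0))  # empirical mean should be near zero
```

In this sketch, a smaller step size brings the stationary distribution of the discrete chain closer to $p^*$ at the cost of more iterations, which is the trade-off the paper's $\tilde{O}(d/\epsilon)$ bound quantifies.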

