Proximal Langevin Algorithm: Rapid Convergence Under Isoperimetry

(1911.01469)
Published Nov 4, 2019 in stat.ML, cs.DS, cs.IT, cs.LG, and math.IT

Abstract

We study the Proximal Langevin Algorithm (PLA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$ under isoperimetry. We prove a convergence guarantee for PLA in Kullback-Leibler (KL) divergence when $\nu$ satisfies a log-Sobolev inequality (LSI) and $f$ has bounded second and third derivatives. This improves on the result for the Unadjusted Langevin Algorithm (ULA), and matches the fastest known rate for sampling under LSI (without Metropolis filter) with a better dependence on the LSI constant. We also prove convergence guarantees for PLA in Rényi divergence of order $q > 1$ when the biased limit satisfies either LSI or a Poincaré inequality.
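To make the sampler concrete, here is a minimal sketch of one common form of the proximal Langevin iteration: add Gaussian noise, then apply the proximal map of $f$, i.e. $x_{k+1} = \mathrm{prox}_{\eta f}\bigl(x_k + \sqrt{2\eta}\, z_k\bigr)$ with $z_k \sim \mathcal{N}(0, I)$. The target, step size, and closed-form proximal map below are illustrative assumptions (a standard Gaussian target $f(x) = \|x\|^2/2$, for which $\mathrm{prox}_{\eta f}(v) = v/(1+\eta)$), not details taken from the paper.

```python
import numpy as np

def prox_quadratic(v, eta):
    # Closed-form proximal map for the illustrative choice f(x) = ||x||^2 / 2:
    # argmin_y { ||y||^2/2 + ||y - v||^2 / (2*eta) } = v / (1 + eta)
    return v / (1.0 + eta)

def pla_sample(x0, eta, n_steps, prox, rng):
    # One proximal Langevin step: forward Gaussian noise, then proximal map.
    x = x0
    for _ in range(n_steps):
        y = x + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
        x = prox(y, eta)
    return x

rng = np.random.default_rng(0)
eta = 0.05  # step size (assumed; small steps reduce the discretization bias)
# Run many independent chains and inspect the empirical moments;
# for the standard Gaussian target they should be close to mean 0, variance 1.
samples = np.array([pla_sample(np.zeros(1), eta, 500, prox_quadratic, rng)[0]
                    for _ in range(2000)])
print(samples.mean(), samples.var())
```

For this quadratic target the biased limit of the discretized chain has variance $2/(2+\eta)$, slightly below the target's variance of 1, which illustrates the "biased limit" mentioned in the abstract; the bias vanishes as $\eta \to 0$.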
