
On the cost of Bayesian posterior mean strategy for log-concave models

(2010.06420)
Published Oct 8, 2020 in math.PR, math.ST, stat.ML, and stat.TH

Abstract

In this paper, we investigate the problem of computing Bayesian estimators using Langevin Monte Carlo-type approximations. The novelty of this paper is to consider the statistical and numerical counterparts together (in a general log-concave setting). More precisely, we address the following question: given $n$ observations in $\mathbb{R}^q$ distributed under an unknown probability $\mathbb{P}_{\theta^\star}$ with $\theta^\star \in \mathbb{R}^d$, what is the optimal numerical strategy and its cost for approximating $\theta^\star$ with the Bayesian posterior mean? To answer this question, we establish quantitative statistical bounds related to the underlying Poincaré constant of the model, and we prove new results on the numerical approximation of Gibbs measures by Cesàro averages of Euler schemes of (over-damped) Langevin diffusions. These last results include, in particular, quantitative controls in the weakly convex case based on new bounds on the solution of the related Poisson equation of the diffusion.
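To make the strategy concrete, here is a minimal sketch of the kind of procedure the abstract describes: approximate the Bayesian posterior mean by the Cesàro (running) average of an Euler discretization of the over-damped Langevin diffusion targeting the posterior. The Gaussian location model, prior, step size, and iteration counts below are illustrative assumptions (not taken from the paper), chosen so that the exact posterior mean is available for comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy log-concave model (illustrative assumption, not from the paper):
# observations x_i ~ N(theta_star, sigma^2 I) in R^d, Gaussian prior N(0, tau^2 I).
d, n = 5, 200
sigma, tau = 1.0, 3.0
theta_star = rng.normal(size=d)
X = theta_star + sigma * rng.normal(size=(n, d))

def grad_U(theta):
    # Gradient of the negative log-posterior (up to an additive constant):
    # U(theta) = sum_i ||x_i - theta||^2 / (2 sigma^2) + ||theta||^2 / (2 tau^2)
    return (n * theta - X.sum(axis=0)) / sigma**2 + theta / tau**2

# Euler scheme (unadjusted Langevin algorithm) for the over-damped diffusion
# d theta_t = -grad U(theta_t) dt + sqrt(2) dB_t, with Cesaro averaging of the iterates.
gamma = 1e-3                      # step size (illustrative choice)
n_iter, burn_in = 50_000, 5_000   # illustrative iteration budget
theta = np.zeros(d)
running_sum = np.zeros(d)
for k in range(n_iter):
    theta = theta - gamma * grad_U(theta) + np.sqrt(2 * gamma) * rng.normal(size=d)
    if k >= burn_in:
        running_sum += theta
posterior_mean_mc = running_sum / (n_iter - burn_in)

# Closed-form posterior mean for this conjugate Gaussian model, for comparison.
posterior_mean_exact = X.sum(axis=0) / (n + sigma**2 / tau**2)
print("error:", np.linalg.norm(posterior_mean_mc - posterior_mean_exact))
```

The paper's contribution is to quantify how the statistical error of the posterior mean and the numerical error of such an averaged Euler scheme combine, so that the step size and number of iterations can be tuned to the sample size; the hard-coded values above are placeholders for that tuning.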
