Emergent Mind

Consistency of Bayesian inference with Gaussian process priors in an elliptic inverse problem

(1910.07343)
Published Oct 16, 2019 in math.ST, cs.NA, math.AP, math.NA, and stat.TH

Abstract

For $\mathcal{O}$ a bounded domain in $\mathbb{R}^d$ and a given smooth function $g:\mathcal{O}\to\mathbb{R}$, we consider the statistical nonlinear inverse problem of recovering the conductivity $f>0$ in the divergence form equation $$ \nabla\cdot(f\nabla u)=g\ \textrm{on}\ \mathcal{O}, \quad u=0\ \textrm{on}\ \partial\mathcal{O}, $$ from $N$ discrete noisy point evaluations of the solution $u=u_f$ on $\mathcal O$. We study the statistical performance of Bayesian nonparametric procedures based on a flexible class of Gaussian (or hierarchical Gaussian) process priors, whose implementation is feasible by MCMC methods. We show that, as the number $N$ of measurements increases, the resulting posterior distributions concentrate around the true parameter generating the data, and derive a convergence rate $N^{-\lambda}, \lambda>0,$ for the reconstruction error of the associated posterior means, in $L^2(\mathcal{O})$-distance.
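To make the observation model concrete, here is a minimal sketch (not the paper's implementation) of a one-dimensional analogue of the forward map $f \mapsto u_f$: the equation $(f u')' = g$ on $(0,1)$ with $u(0)=u(1)=0$ is discretised by a conservative finite-difference scheme, and $N$ noisy point evaluations $Y_i = u_f(X_i) + \sigma \varepsilon_i$ are simulated. The particular choices of `f_true`, `g`, `N`, and `sigma` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def solve_forward(f, g, n=200):
    """Solve (f u')' = g on (0,1) with u(0) = u(1) = 0.

    f, g : callables on [0,1]; f must be strictly positive (the conductivity).
    Returns (x, u): interior grid nodes and the approximate PDE solution there.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                        # interior nodes
    f_half = f(np.linspace(h / 2, 1 - h / 2, n + 1))    # f at cell interfaces
    # Conservative discretisation:
    # (f_{i+1/2}(u_{i+1}-u_i) - f_{i-1/2}(u_i-u_{i-1})) / h^2 = g(x_i)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -(f_half[i] + f_half[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = f_half[i] / h**2
        if i < n - 1:
            A[i, i + 1] = f_half[i + 1] / h**2
    u = np.linalg.solve(A, g(x))
    return x, u

# N discrete noisy point evaluations Y_i = u_f(X_i) + sigma * eps_i
rng = np.random.default_rng(0)
f_true = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t)    # hypothetical f > 0
g = lambda t: np.ones_like(t)
x, u = solve_forward(f_true, g)
N, sigma = 50, 0.01
idx = rng.choice(len(x), size=N, replace=False)
Y = u[idx] + sigma * rng.standard_normal(N)             # observed data
```

In the Bayesian procedure studied in the paper, a (hierarchical) Gaussian process prior would be placed on a parametrisation of $f$, and MCMC would target the posterior given data $(X_i, Y_i)_{i=1}^N$; the snippet above only covers the data-generating forward model.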

