Convergence rates for Penalised Least Squares Estimators in PDE-constrained regression problems

(1809.08818)
Published Sep 24, 2018 in math.ST, cs.NA, math.AP, math.NA, and stat.TH

Abstract

We consider PDE-constrained nonparametric regression problems in which the parameter $f$ is the unknown coefficient function of a second order elliptic partial differential operator $L_f$, and the unique solution $u_f$ of the boundary value problem \[ L_f u = g_1 \text{ on } \mathcal O, \quad u = g_2 \text{ on } \partial\mathcal O, \] is observed corrupted by additive Gaussian white noise. Here $\mathcal O$ is a bounded domain in $\mathbb R^d$ with smooth boundary $\partial\mathcal O$, and $g_1, g_2$ are given functions defined on $\mathcal O, \partial\mathcal O$, respectively. Concrete examples include $L_f u = \Delta u - 2fu$ (Schrödinger equation with attenuation potential $f$) and $L_f u = \text{div}(f\nabla u)$ (divergence form equation with conductivity $f$). In both cases, the parameter space \[ \mathcal F = \{f \in H^\alpha(\mathcal O) \mid f > 0\}, \quad \alpha > 0, \] where $H^\alpha(\mathcal O)$ is the usual order-$\alpha$ Sobolev space, induces a set of non-linearly constrained regression functions $\{u_f : f \in \mathcal F\}$. We study Tikhonov-type penalised least squares estimators $\hat f$ for $f$. The penalty functionals are of squared Sobolev-norm type, and thus $\hat f$ can also be interpreted as a Bayesian `MAP'-estimator corresponding to some Gaussian process prior. We derive rates of convergence of $\hat f$ and of $u_{\hat f}$ to $f$ and $u_f$, respectively. We prove that the rates obtained are minimax-optimal in prediction loss. Our bounds are derived from a general convergence rate result for non-linear inverse problems whose forward map satisfies a modulus of continuity condition, a result of independent interest that is also applicable to linear inverse problems, illustrated in an example with the Radon transform.
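
In schematic form, the Tikhonov-type estimator described in the abstract minimises a penalised least squares criterion, \[ \hat f_\lambda \in \arg\min_{f \in \mathcal F} \|u_f - Y\|_{L^2(\mathcal O)}^2 + \lambda \|f\|_{H^\alpha(\mathcal O)}^2, \] where $Y$ is the noisy observation of $u_f$. Below is a minimal numerical sketch of this idea for a one-dimensional toy version of the Schrödinger example; the finite-difference discretisation, the per-grid-point noise (a crude surrogate for the white noise model), the $H^1$ penalty (the case $\alpha = 1$), and all constants are illustrative assumptions, not the paper's construction.

```python
# Hedged toy sketch of a Tikhonov-penalised least squares estimator for the
# 1D Schrodinger-type problem  u'' - 2 f u = g_1 on (0,1),  u(0) = u(1) = 0.
# All modelling choices below are illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)           # interior grid points of (0, 1)
g1 = -np.ones(n)                         # right-hand side g_1 (illustrative)
sigma = 0.05                             # noise level (illustrative)

def solve_bvp(f):
    """Finite-difference solve of u'' - 2 f u = g_1 with u(0) = u(1) = 0."""
    A = (np.diag(-2.0 / h**2 - 2.0 * f)
         + np.diag(np.full(n - 1, 1.0 / h**2), 1)
         + np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.solve(A, g1)

rng = np.random.default_rng(0)
f_true = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)            # true potential, f > 0
Y = solve_bvp(f_true) + sigma * rng.standard_normal(n)  # noisy observation of u_f

lam = 1e-3                               # Tikhonov weight lambda (untuned)

def objective(theta):
    f = np.exp(theta)                    # reparametrise to enforce f > 0
    resid = solve_bvp(f) - Y
    fit = h * np.sum(resid**2)           # discretised ||u_f - Y||_{L^2}^2
    fprime = np.gradient(f, h)
    pen = h * np.sum(f**2 + fprime**2)   # discretised ||f||_{H^1}^2 (alpha = 1)
    return fit + lam * pen

res = minimize(objective, x0=np.zeros(n), method="L-BFGS-B")
f_hat = np.exp(res.x)
print("relative L2 error of f_hat:",
      np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

The exponential reparametrisation is one simple way to respect the constraint $f > 0$ defining $\mathcal F$; in practice the penalty order and the weight $\lambda$ would be chosen to match the smoothness $\alpha$, which is what drives the convergence rates studied in the paper.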
