Convergence rates for Penalised Least Squares Estimators in PDE-constrained regression problems (1809.08818v3)

Published 24 Sep 2018 in math.ST, cs.NA, math.AP, math.NA, and stat.TH

Abstract: We consider PDE-constrained nonparametric regression problems in which the parameter $f$ is the unknown coefficient function of a second order elliptic partial differential operator $L_f$, and the unique solution $u_f$ of the boundary value problem \[L_f u = g_1 \text{ on } \mathcal O, \quad u = g_2 \text{ on } \partial\mathcal O,\] is observed corrupted by additive Gaussian white noise. Here $\mathcal O$ is a bounded domain in $\mathbb R^d$ with smooth boundary $\partial\mathcal O$, and $g_1, g_2$ are given functions defined on $\mathcal O, \partial\mathcal O$, respectively. Concrete examples include $L_f u = \Delta u - 2fu$ (Schrödinger equation with attenuation potential $f$) and $L_f u = \text{div}(f\nabla u)$ (divergence form equation with conductivity $f$). In both cases, the parameter space \[\mathcal F = \{f \in H^\alpha(\mathcal O) \mid f > 0\}, \quad \alpha > 0,\] where $H^\alpha(\mathcal O)$ is the usual order-$\alpha$ Sobolev space, induces a set of non-linearly constrained regression functions $\{u_f : f \in \mathcal F\}$. We study Tikhonov-type penalised least squares estimators $\hat f$ for $f$. The penalty functionals are of squared Sobolev-norm type and thus $\hat f$ can also be interpreted as a Bayesian 'MAP'-estimator corresponding to some Gaussian process prior. We derive rates of convergence of $\hat f$ and of $u_{\hat f}$, to $f, u_f$, respectively. We prove that the rates obtained are minimax-optimal in prediction loss. Our bounds are derived from a general convergence rate result for non-linear inverse problems whose forward map satisfies a modulus of continuity condition, a result of independent interest that is applicable also to linear inverse problems, illustrated in an example with the Radon transform.
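
The following is a minimal numerical sketch of the estimation scheme the abstract describes, not the paper's implementation. It assumes a 1D domain O = (0,1), the Schrödinger-type operator L_f u = Δu - 2fu with zero Dirichlet data (an arbitrary choice of g_2), a finite-difference forward solver, and an H^1-type penalty (i.e. α = 1). The grid size, noise level, regularisation weight λ, source term g_1, and all names (solve_pde, f_true, etc.) are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    n = 50                      # interior grid points (illustrative)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)

    # Second-difference approximation of the Laplacian with zero
    # Dirichlet boundary conditions (our illustrative choice of g_2 = 0).
    D2 = (np.diag(-2.0 * np.ones(n))
          + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2

    g1 = -np.ones(n)            # illustrative right-hand side g_1

    def solve_pde(f):
        """Forward map f -> u_f: solve (Delta - 2 f) u = g1 on the grid."""
        return np.linalg.solve(D2 - 2.0 * np.diag(f), g1)

    # Synthetic ground truth and a discretised white-noise observation of u_f.
    f_true = 1.0 + 0.5 * np.sin(2 * np.pi * x)
    rng = np.random.default_rng(0)
    sigma = 0.01                # noise level (illustrative)
    Y = solve_pde(f_true) + sigma / np.sqrt(h) * rng.standard_normal(n)

    lam = 1e-4                  # Tikhonov weight lambda (illustrative)

    def objective(f):
        """Penalised least squares criterion ||u_f - Y||^2 + lam * ||f||_{H^1}^2."""
        u = solve_pde(f)
        fit = h * np.sum((u - Y) ** 2)               # discretised L^2 fidelity
        df = np.diff(f) / h
        pen = h * np.sum(f ** 2) + h * np.sum(df ** 2)  # crude H^1-norm penalty
        return fit + lam * pen

    # Minimise over grid values of f; the constraint f > 0 from the parameter
    # space F is enforced here by simple box bounds.
    res = minimize(objective, x0=np.ones(n), method="L-BFGS-B",
                   bounds=[(1e-3, None)] * n)
    f_hat = res.x
    print("relative L2 error of f_hat:",
          np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))

The objective mirrors the abstract's Tikhonov criterion: a squared L^2 data-fit term for the observed u_f plus a squared Sobolev-norm penalty on f, with the positivity constraint from the parameter space handled by box bounds. The paper's convergence rate theory concerns the continuum estimator; this discretised toy problem only illustrates the structure of the optimisation.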

Citations (57)