Emergent Mind

Optimal sampling and Christoffel functions on general domains

(2010.11040)
Published Oct 21, 2020 in math.NA and cs.NA

Abstract

We consider the problem of reconstructing an unknown function $u\in L^2(D,\mu)$ from its evaluations at given sampling points $x_1,\dots,x_m\in D$, where $D\subset \mathbb R^d$ is a general domain and $\mu$ a probability measure. The approximation is picked from a linear space $V_n$ of interest, where $n=\dim(V_n)$. Recent results have revealed that certain weighted least-squares methods achieve near-best approximation with a sampling budget $m$ that is proportional to $n$, up to a logarithmic factor $\ln(2n/\varepsilon)$, where $\varepsilon>0$ is a probability of failure. The sampling points should be picked at random according to a well-chosen probability measure $\sigma$ whose density is given by the inverse Christoffel function, which depends on both $V_n$ and $\mu$. While this approach is greatly facilitated when $D$ and $\mu$ have tensor-product structure, it becomes problematic for domains $D$ with arbitrary geometry, since the optimal measure depends on an orthonormal basis of $V_n$ in $L^2(D,\mu)$ which is not explicitly given, even for simple polynomial spaces. Sampling according to this measure is therefore not practically feasible. In this paper, we discuss practical sampling strategies, which amount to using a perturbed measure $\widetilde \sigma$ that can be computed in an offline stage not involving the measurement of $u$. We show that near-best approximation is attained by the resulting weighted least-squares method at near-optimal sampling budget, and we discuss multilevel approaches that preserve optimality of the cumulated sampling budget when the spaces $V_n$ are iteratively enriched. These strategies rely on knowledge of a-priori upper bounds on the inverse Christoffel function. We establish such bounds for spaces $V_n$ of multivariate algebraic polynomials, and for general domains $D$.
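To make the sampling strategy in the abstract concrete, here is a minimal sketch of the weighted least-squares method in the simplest setting where the optimal measure is computable: $D=[-1,1]$ with $\mu$ the uniform measure and $V_n$ the Legendre polynomials of degree $<n$ (an explicit orthonormal basis, unlike the general domains the paper targets). The inverse Christoffel function is $k_n(x)=\sum_{j<n}|\phi_j(x)|^2$, samples are drawn from the density $k_n/n$ with respect to $\mu$ (by rejection, using the bound $k_n\le n^2$ for Legendre), and the least-squares fit uses weights $w=n/k_n$. All function and variable names are our own illustration, not the paper's code.

```python
import numpy as np

def orthonormal_legendre(x, n):
    # Legendre basis orthonormal w.r.t. the uniform measure dmu = dx/2 on [-1,1]:
    # since int P_j^2 dmu = 1/(2j+1), scale P_j by sqrt(2j+1).
    V = np.polynomial.legendre.legvander(np.atleast_1d(x), n - 1)
    return V * np.sqrt(2.0 * np.arange(n) + 1.0)

def inverse_christoffel(x, n):
    # k_n(x) = sum_{j<n} |phi_j(x)|^2
    Phi = orthonormal_legendre(x, n)
    return (Phi ** 2).sum(axis=1)

def sample_optimal_measure(n, m, rng):
    # Rejection sampling from the density (k_n / n) d(mu), proposing from mu
    # (uniform on [-1,1]); for Legendre, k_n(x) <= n^2 with the max at x = +/-1.
    samples = []
    while len(samples) < m:
        x = rng.uniform(-1.0, 1.0, size=m)
        u = rng.uniform(0.0, 1.0, size=m)
        samples.extend(x[u < inverse_christoffel(x, n) / n ** 2])
    return np.array(samples[:m])

def weighted_least_squares(u, n, m, rng):
    # Draw m points from the optimal measure, then solve the weighted LS problem
    # min_c sum_i w_i |u(x_i) - (Phi c)(x_i)|^2 with weights w = n / k_n.
    x = sample_optimal_measure(n, m, rng)
    sw = np.sqrt(n / inverse_christoffel(x, n))
    Phi = orthonormal_legendre(x, n)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * u(x), rcond=None)
    return x, coef

rng = np.random.default_rng(0)
u = lambda t: np.exp(t)                      # toy target function
n, m = 8, 200                                # m proportional to n (up to log factor)
x, coef = weighted_least_squares(u, n, m, rng)
grid = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(orthonormal_legendre(grid, n) @ coef - u(grid)))
```

The paper's contribution addresses the step this sketch takes for granted: on a general domain $D$ the orthonormal basis, and hence $k_n$, is not explicitly available, so one instead samples from a computable perturbed measure $\widetilde\sigma$ built from a-priori upper bounds on $k_n$.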
