Boosted optimal weighted least-squares

(1912.07075)
Published Dec 15, 2019 in math.NA, cs.NA, math.ST, and stat.TH

Abstract

This paper is concerned with the approximation of a function $u$ in a given approximation space $V_m$ of dimension $m$ from evaluations of the function at $n$ suitably chosen points. The aim is to construct an approximation of $u$ in $V_m$ whose error is close to the best approximation error in $V_m$, using as few evaluations as possible. Classical least-squares regression, which defines a projection in $V_m$ from $n$ random points, usually requires a large $n$ to guarantee a stable approximation and an error close to the best approximation error. This is a major drawback for applications where $u$ is expensive to evaluate. One remedy is to use a weighted least-squares projection based on $n$ samples drawn from a properly selected distribution. In this paper, we introduce a boosted weighted least-squares method which ensures, almost surely, the stability of the weighted least-squares projection with a sample size close to the interpolation regime $n=m$. It consists of sampling according to a measure associated with the optimization of a stability criterion over a collection of independent $n$-point samples, and resampling according to this measure until a stability condition is satisfied. A greedy method is then proposed to remove points from the obtained sample. Quasi-optimality properties are obtained for the weighted least-squares projection, with or without the greedy procedure. The proposed method is validated on numerical examples and compared to state-of-the-art interpolation and weighted least-squares methods.
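
To make the pipeline concrete, here is a minimal sketch of the idea, not the paper's implementation. It assumes an orthonormal Legendre basis on $[-1,1]$ with the uniform measure, the standard optimal sampling density $k_m(x)/m$ (where $k_m(x) = \sum_{j=1}^m \varphi_j(x)^2$) with weights $w(x) = m/k_m(x)$, and a stability criterion of the form $\|G - I\|_2 \le \delta$ on the empirical Gram matrix $G$, with $\delta = 1/2$ as an illustrative choice. The helper names (`sample_optimal`, `boosted_sample`, `greedy_removal`) are hypothetical.

```python
# Sketch of boosted weighted least-squares under the assumptions stated above;
# all function names and the choice delta = 1/2 are illustrative, not the paper's.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

def phi(x, m):
    """Orthonormal Legendre basis: sqrt(2k+1) * P_k(x), k = 0..m-1."""
    V = legendre.legvander(np.atleast_1d(x), m - 1)  # columns P_0..P_{m-1}
    return V * np.sqrt(2 * np.arange(m) + 1)

def sample_optimal(n, m):
    """Draw n points from the optimal density k_m(x)/m by rejection sampling.
    For Legendre, sup_x k_m(x)/m = m (attained at x = +/-1)."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.0, 1.0)
        km = np.sum(phi(x, m) ** 2)            # k_m(x) = sum_j phi_j(x)^2
        if rng.uniform() < km / (m * m):       # accept with prob (k_m/m)/m
            pts.append(x)
    return np.array(pts)

def gram(x, m):
    """Empirical Gram matrix G_{jk} = (1/n) sum_i w(x_i) phi_j(x_i) phi_k(x_i)."""
    P = phi(x, m)
    w = m / np.sum(P ** 2, axis=1)             # optimal weights w(x) = m / k_m(x)
    return (P * w[:, None]).T @ P / len(x)

def boosted_sample(n, m, n_trials=10, delta=0.5):
    """Boosting: among n_trials independent n-samples keep the one minimizing
    ||G - I||_2, and resample until the stability condition holds."""
    I = np.eye(m)
    while True:
        cands = [sample_optimal(n, m) for _ in range(n_trials)]
        errs = [np.linalg.norm(gram(x, m) - I, 2) for x in cands]
        best = int(np.argmin(errs))
        if errs[best] <= delta:                # stability condition satisfied
            return cands[best]

def greedy_removal(x, m, delta=0.5):
    """Greedily drop points while ||G - I||_2 <= delta still holds."""
    x = list(x)
    improved = True
    while improved and len(x) > m:
        improved = False
        for i in range(len(x)):
            trial = x[:i] + x[i + 1:]
            if np.linalg.norm(gram(np.array(trial), m) - np.eye(m), 2) <= delta:
                x, improved = trial, True
                break
    return np.array(x)

# Usage: approximate u(x) = exp(x) in V_m from few evaluations.
m, n = 5, 15
x = greedy_removal(boosted_sample(n, m), m)
P = phi(x, m)
w = m / np.sum(P ** 2, axis=1)
A = P * np.sqrt(w)[:, None]                    # weighted design matrix
coeffs, *_ = np.linalg.lstsq(A, np.sqrt(w) * np.exp(x), rcond=None)
print(len(x), coeffs)
```

Boosting here means drawing several independent $n$-samples, keeping the one with the best value of the stability criterion, and resampling until the condition $\|G - I\|_2 \le \delta$ holds; the greedy pass then discards points whose removal preserves stability, pushing the sample size toward the interpolation regime $n = m$.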
