
Abstract

Given $n$ samples of a function $f\colon D\to\mathbb C$ at random points drawn with respect to a measure $\varrho_S$, we develop a theoretical analysis of the $L_2(D, \varrho_T)$-approximation error. For a particular choice of $\varrho_S$ depending on $\varrho_T$, it is known that the weighted least squares method from finite-dimensional function spaces $V_m$, $\dim(V_m) = m < \infty$, achieves the same error as the best approximation in $V_m$ up to a multiplicative constant when given exact samples with logarithmic oversampling. If the source measure $\varrho_S$ and the target measure $\varrho_T$ differ, we are in the domain adaptation setting, a subfield of transfer learning. We model the resulting deterioration of the error in our bounds. Further, for noisy samples, our bounds describe the bias-variance trade-off depending on the dimension $m$ of the approximation space $V_m$. All results hold with high probability. For demonstration, we consider functions defined on the $d$-dimensional cube given in uniform random samples. We analyze polynomials, the half-period cosine, and a bounded orthonormal basis of the non-periodic Sobolev space $H^2_{\mathrm{mix}}$. Overcoming numerical issues of this $H^2_{\mathrm{mix}}$ basis, this gives a novel stable approximation method with quadratic error decay. Numerical experiments indicate the applicability of our results.
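As a rough illustration of the weighted least squares estimator the abstract refers to, the following Python sketch fits noisy samples drawn from a source measure onto a finite-dimensional space $V_m$ by reweighting each sample. It is a minimal sketch, not the authors' implementation; the half-period cosine basis on $[0,1]$, the weight choice, and the test function are assumptions made here for illustration.

```python
import numpy as np

def weighted_least_squares(x, y, basis, weights):
    """Weighted least squares fit of samples y ~ f(x) onto span{basis}.

    x       : (n,) sample points drawn from the source measure rho_S
    y       : (n,) possibly noisy function values f(x)
    basis   : list of m callables spanning the approximation space V_m
    weights : (n,) sample weights, e.g. the density ratio d rho_T / d rho_S
    Returns the coefficient vector c of the least-squares approximant.
    """
    # Design matrix A[i, k] = eta_k(x_i)
    A = np.column_stack([np.asarray(b(x)) for b in basis])
    # Reweight rows and solve min_c || W^{1/2} (A c - y) ||_2
    sw = np.sqrt(weights)
    c, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return c

# Hypothetical example: recover a smooth function on [0, 1] in the
# half-period cosine basis from uniform random samples. Here rho_S = rho_T
# (no domain shift), so all weights are 1.
rng = np.random.default_rng(0)
f = lambda t: np.exp(-t) * np.sin(4 * t)
n, m = 200, 12
x = rng.uniform(0.0, 1.0, n)                      # samples ~ rho_S
y = f(x) + 1e-3 * rng.standard_normal(n)          # noisy samples
basis = [lambda t, k=k: (np.ones_like(t) if k == 0
                         else np.sqrt(2) * np.cos(np.pi * k * t))
         for k in range(m)]
c = weighted_least_squares(x, y, basis, np.ones(n))
```

When $\varrho_S = \varrho_T$ the weights are constant, as in this toy run; under a source-target mismatch one would instead evaluate the density ratio $\mathrm d\varrho_T/\mathrm d\varrho_S$ at the sample points, which is the deterioration the paper's bounds quantify.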
