Stable high-order randomized cubature formulae in arbitrary dimension

(1812.07761)
Published Dec 19, 2018 in math.NA and cs.NA

Abstract

We propose and analyse randomized cubature formulae for the numerical integration of functions with respect to a given probability measure $\mu$ defined on a domain $\Gamma \subseteq \mathbb{R}^d$, in any dimension $d$. Each cubature formula is exact on a given finite-dimensional subspace $V_n\subset L^2(\Gamma,\mu)$ of dimension $n$, and uses pointwise evaluations of the integrand function $\phi : \Gamma \to \mathbb{R}$ at $m>n$ independent random points. These points are drawn from a suitable auxiliary probability measure that depends on $V_n$. We show that, up to a logarithmic factor, a linear proportionality between $m$ and $n$ with dimension-independent constant ensures stability of the cubature formula with high probability. We also prove error estimates in probability and in expectation for any $n\geq 1$ and $m>n$, thus covering both preasymptotic and asymptotic regimes. Our analysis shows that the expected cubature error decays as $\sqrt{n/m}$ times the $L^2(\Gamma, \mu)$-best approximation error of $\phi$ in $V_n$. On the one hand, for fixed $n$ and $m\to \infty$ our cubature formula can be seen as a variance reduction technique for a Monte Carlo estimator, and can lead to enormous variance reduction for smooth integrand functions and subspaces $V_n$ with spectral approximation properties. On the other hand, when we let $n,m\to\infty$, our cubature becomes of high order with spectral convergence. As a further contribution, we also analyse another cubature formula whose expected error decays as $\sqrt{1/m}$ times the $L^2(\Gamma,\mu)$-best approximation error of $\phi$ in $V_n$, which is asymptotically optimal but with constants that can be larger in the preasymptotic regime. Finally we show that, under a more demanding (at least quadratic) proportionality between $m$ and $n$, the weights of the cubature are positive with high probability.
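To make the construction concrete, the following is a minimal illustrative sketch (not taken verbatim from the paper) of a randomized cubature of this kind in the simplest setting: $\Gamma=[-1,1]$ with the uniform measure $\mu$, $V_n$ spanned by the first $n$ orthonormal Legendre polynomials, random points drawn from an auxiliary measure with density proportional to the inverse Christoffel function, and the cubature value obtained by integrating a weighted least-squares projection onto $V_n$. All variable names and the specific choice of sampling density and weights are assumptions made for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative assumptions: Gamma = [-1, 1], mu = uniform probability measure,
# V_n = span of the first n orthonormal Legendre polynomials, m > n sample points.
n, m = 10, 100
rng = np.random.default_rng(0)

def onb(x, n):
    """Evaluate the n Legendre polynomials orthonormal w.r.t. uniform mu on [-1, 1]."""
    x = np.atleast_1d(x)
    V = np.stack([legendre.legval(x, [0.0] * j + [1.0]) for j in range(n)], axis=1)
    return V * np.sqrt(2 * np.arange(n) + 1)   # int p_j^2 dmu = 1

def k_n(x):
    """sum_j p_j(x)^2; the auxiliary sampling density w.r.t. mu is k_n(x) / n."""
    return (onb(x, n) ** 2).sum(axis=1)

# Draw m i.i.d. points from the auxiliary measure by rejection sampling:
# for this basis k_n(x)/n <= n on [-1, 1], so n is a valid envelope constant.
pts = []
while len(pts) < m:
    x = rng.uniform(-1.0, 1.0)
    if rng.uniform() < k_n(x)[0] / (n * n):
        pts.append(x)
pts = np.array(pts)

def cubature(phi):
    """Integrate the weighted least-squares projection of phi onto V_n;
    the resulting formula is exact whenever phi lies in V_n."""
    w = n / k_n(pts)                        # dmu/dnu at the sample points
    A = np.sqrt(w)[:, None] * onb(pts, n)   # weighted design matrix
    b = np.sqrt(w) * phi(pts)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c[0]                             # int p_0 dmu = 1, int p_j dmu = 0 for j >= 1

# Usage: a smooth integrand, compared with plain Monte Carlo on the same budget m.
phi = lambda x: np.exp(x)
print("cubature:", cubature(phi))
print("exact   :", (np.e - np.exp(-1)) / 2)
print("MC      :", phi(rng.uniform(-1.0, 1.0, m)).mean())
```

In this sketch the huge gap between the cubature and plain Monte Carlo on the same number of evaluations illustrates the variance-reduction interpretation above: for a smooth integrand and a subspace with spectral approximation properties, the error is driven by the best approximation error of $\phi$ in $V_n$ rather than by the raw $1/\sqrt{m}$ Monte Carlo rate.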
