
Abstract

Recent findings by Jahn, T. Ullrich, and Voigtlaender [10] relate non-linear sampling numbers for the square norm to quantities involving trigonometric best $m$-term approximation errors in the uniform norm. Here we establish new results for sparse trigonometric approximation in the high-dimensional setting, where the influence of the dimension $d$ has to be controlled. In particular, we focus on best $m$-term trigonometric approximation for (unweighted) Wiener classes in $L_q$ and give precise constants. Our main results are approximation guarantees where the number of terms $m$ scales at most quadratically in the inverse accuracy $1/\varepsilon$. Providing a refined version of the classical Nikol'skij inequality, we are able to extrapolate the $L_q$-result to $L_\infty$ while limiting the influence of the dimension to a $\sqrt{d}$-factor and an additional $\log$-term in the size of the (rectangular) spectrum. This has consequences for the tractable sampling recovery via $\ell_1$-minimization of functions belonging to certain Besov classes with bounded mixed smoothness. This complements polynomial tractability results recently given by Krieg [12].
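The notion of best $m$-term trigonometric approximation central to the abstract can be illustrated with a minimal sketch: retain the $m$ largest-magnitude Fourier coefficients of a periodic function and measure the resulting uniform-norm error. This is a hypothetical 1D illustration only (the paper works in high dimension $d$ with controlled constants); the test function and parameters below are my own choices, not taken from the paper.

```python
import numpy as np

# Hypothetical 1D illustration of best m-term trigonometric approximation:
# keep the m largest-magnitude Fourier coefficients and measure the
# uniform-norm (sup-norm) error of the resulting approximant.

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.exp(np.cos(x))                 # smooth periodic test function (assumed example)

c = np.fft.fft(f) / N                 # discrete Fourier coefficients c_k
m = 8
idx = np.argsort(np.abs(c))[::-1][:m] # indices of the m largest coefficients

c_m = np.zeros_like(c)
c_m[idx] = c[idx]                     # sparse coefficient vector: only m terms kept
f_m = np.real(np.fft.ifft(c_m) * N)   # best m-term trigonometric approximant

err = np.max(np.abs(f - f_m))         # uniform-norm approximation error
print(f"sup-norm error with m={m} terms: {err:.2e}")
```

Because the test function is analytic, its Fourier coefficients decay rapidly and a small $m$ already yields a small uniform error; the quantitative $m \lesssim 1/\varepsilon^2$ guarantees of the paper concern Wiener classes in high dimension, which this toy example does not capture.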
