
Function values are enough for $L_2$-approximation (1905.02516v5)

Published 7 May 2019 in math.NA, cs.NA, and math.PR

Abstract: We study the $L_2$-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number $e_n$ is the minimal worst case error that can be achieved with $n$ function values, whereas the approximation number $a_n$ is the minimal worst case error that can be achieved with $n$ pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that \[ e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j\geq k_n} a_j^2}, \] where $k_n \asymp n/\log(n)$. This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and therefore that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces $H^s_{\rm mix}(\mathbb{T}^d)$ with dominating mixed smoothness $s>1/2$ and we obtain \[ e_n \,\lesssim\, n^{-s} \log^{sd}(n). \] For $d>2s+1$, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.
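To see how the main bound yields the stated polynomial rate, here is a short plug-in computation under the model assumption $a_j \asymp j^{-s}$ (illustrative only; it is not the paper's proof, and the mixed-smoothness case carries extra logarithmic factors):

```latex
% Assume a_j \asymp j^{-s} with s > 1/2 (model decay, square-summable).
% Tail estimate: \sum_{j \ge k} j^{-2s} \asymp k^{1-2s}, hence
\[
  e_n \;\lesssim\; \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2}
      \;\asymp\; \sqrt{\frac{1}{k_n}\, k_n^{1-2s}}
      \;=\; k_n^{-s}.
\]
% Inserting k_n \asymp n / \log(n) gives
\[
  e_n \;\lesssim\; \left(\frac{n}{\log n}\right)^{-s}
      \;=\; n^{-s} \log^{s}(n),
\]
% i.e. the sampling numbers match the approximation numbers
% up to a logarithmic factor.
```

In the Sobolev case the same computation, applied to $a_n \asymp n^{-s}\log^{s(d-1)}(n)$, produces the $n^{-s}\log^{sd}(n)$ bound quoted in the abstract.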

Citations (64)

