Approximation in $L^p(\mu)$ with deep ReLU neural networks (1904.04789v1)

Published 9 Apr 2019 in math.FA and cs.LG

Abstract: We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0,x\}$ by analyzing the approximation-theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth (which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$) can be generalized to approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
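
As a brief illustration of the quantity being controlled (a minimal sketch; the domain $\Omega \subseteq \mathbb{R}^d$ and the functions $f, g$ are our notation, not fixed by the abstract), the approximation error is measured in the norm
$$\|f\|_{L^p(\mu)} := \Big( \int_{\Omega} |f(x)|^p \, d\mu(x) \Big)^{1/p},$$
so that for a probability measure $\mu = \mathbb{P}$ and $p = 2$, the squared distance between a target $f$ and a ReLU network realization $g$ is the expected squared loss with respect to the data distribution,
$$\|f - g\|_{L^2(\mathbb{P})}^2 = \mathbb{E}_{X \sim \mathbb{P}}\big[\, |f(X) - g(X)|^2 \,\big].$$
Taking $\mu = \lambda$, the Lebesgue measure on $\Omega$, recovers the setting of the earlier fixed-depth results that the paper generalizes.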

Citations (3)
