Approximation in $L^p(\mu)$ with deep ReLU neural networks (1904.04789v1)

Published 9 Apr 2019 in math.FA and cs.LG

Abstract: We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0,x\}$ by analyzing the approximation theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth, which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to approximation in $L^p(\mu)$ for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
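As a minimal illustrative sketch (not taken from the paper), the snippet below shows what approximation in $L^2(\mathbb{P})$ means in practice: a fixed-depth ReLU network that represents $f(x) = |x|$ exactly via $|x| = \varrho(x) + \varrho(-x)$, with the approximation error estimated by Monte Carlo sampling from an assumed standard normal data distribution $\mathbb{P}$.

```python
# Illustrative sketch only: a fixed-depth ReLU network realizing f(x) = |x|
# exactly, and a Monte Carlo estimate of the L^2(P) approximation error
# when P is assumed to be a standard normal data distribution.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def network(x):
    # One hidden layer with two ReLU units: |x| = relu(x) + relu(-x).
    W1 = np.array([[1.0], [-1.0]])   # hidden-layer weights, shape (2, 1)
    w2 = np.array([1.0, 1.0])        # output weights, shape (2,)
    return relu(x @ W1.T) @ w2       # affine -> ReLU -> affine

target = lambda x: np.abs(x[:, 0])

rng = np.random.default_rng(0)
samples = rng.standard_normal((100_000, 1))   # i.i.d. draws from P
err = np.sqrt(np.mean((target(samples) - network(samples)) ** 2))
print(f"estimated L^2(P) error: {err:.2e}")   # ~0 up to floating-point rounding
```

For targets that a fixed-depth ReLU network cannot represent exactly, the same Monte Carlo estimate measures how small the $L^2(\mathbb{P})$ error can be made as the network width grows, which is the quantity the paper's fixed-depth results control for general finite Borel measures.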
