
Approximation in $L^p(\mu)$ with deep ReLU neural networks

(arXiv:1904.04789)
Published Apr 9, 2019 in math.FA and cs.LG

Abstract

We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0,x\}$ by analyzing the approximation theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth, which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
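To make the objects in the abstract concrete, here is a minimal sketch of how the $L^2(\mathbb{P})$ approximation error of a fixed-depth ReLU network can be estimated by Monte Carlo sampling from the data distribution. Everything here is an illustrative assumption rather than the paper's construction: the Gaussian choice of $\mathbb{P}$, the target function $f$, and the random depth-3 architecture are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The non-smooth ReLU activation varrho(x) = max{0, x}.
    return np.maximum(0.0, x)

def relu_network(x, weights, biases):
    # Fixed-depth ReLU network: affine map followed by ReLU on each
    # hidden layer, and a final affine output layer.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

# Hypothetical network of fixed depth (two hidden layers) on R^d, d = 2.
d, width = 2, 16
sizes = [d, width, width, 1]
weights = [rng.standard_normal((m, n)) / np.sqrt(m)
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def target(x):
    # Stand-in target function f to be approximated.
    return np.sin(x[:, :1]) * np.cos(x[:, 1:2])

# Monte Carlo estimate of ||f - network||_{L^2(P)}, where P (here a
# standard Gaussian) plays the role of the data distribution: draw
# samples from P and average the squared pointwise error.
X = rng.standard_normal((100_000, d))  # samples from P
sq_err = (target(X) - relu_network(X, weights, biases)) ** 2
l2_error = np.sqrt(sq_err.mean())
print(f"estimated L^2(P) error: {l2_error:.4f}")
```

The point of the generalization in the paper is visible in the last few lines: the error is measured against whatever finite Borel measure generated the samples, not against the Lebesgue measure, so the same estimate makes sense for any data distribution $\mathbb{P}$.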
