Optimal approximation of piecewise smooth functions using deep ReLU neural networks

(1709.05289)
Published Sep 15, 2017 in math.FA, cs.LG, and stat.ML

Abstract

We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, which is required for approximating classifier functions in $L^2$. As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve the optimal approximation rate, one needs ReLU networks of a certain depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a multiplicative constant, by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension-reducing feature map $\tau$ and a classifier function $g$, defined on a low-dimensional feature space, as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension.
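As a rough illustration of the rates quoted in the abstract, the short Python sketch below plugs concrete values of $\varepsilon$, $d$, and $\beta$ into the asymptotic bounds. It is not from the paper: the function name is ours, all multiplicative constants and log factors are dropped, and the paper's actual construction uses sparse networks rather than anything this calculation prescribes.

```python
import math

def relu_approximation_budget(eps: float, d: int, beta: float):
    """
    Back-of-the-envelope evaluation of the complexity bounds stated in the
    abstract, with all hidden constants and log factors set to 1 (illustrative
    only, not the paper's construction).

    - weight budget: O(eps^{-2(d-1)/beta}) nonzero weights suffice (and are
      necessary) for an L^2 error of eps on piecewise C^beta functions.
    - depth: the minimal depth needed to realize this rate scales like beta/d,
      up to a multiplicative constant.
    """
    weights = eps ** (-2.0 * (d - 1) / beta)  # optimal nonzero-weight count, up to constants
    depth = beta / d                          # minimal depth scale, up to constants
    return math.ceil(weights), max(1, math.ceil(depth))

# Example: d = 2 input dimensions, smoothness beta = 2, target accuracy eps = 0.01.
# The exponent is -2(d-1)/beta = -1, so the weight budget is roughly eps^{-1} = 100,
# and the depth scale beta/d is of order 1 (again, up to constants and log factors).
print(relu_approximation_budget(0.01, d=2, beta=2.0))
```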
