Emergent Mind

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

(1708.02691)
Published Aug 9, 2017 in stat.ML, cs.CG, cs.LG, math.FA, math.ST, and stat.TH

Abstract

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width $w_{\text{min}}(d)$ so that ReLU nets of width $w_{\text{min}}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? Our approach in this paper is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well-suited for representing convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3$.
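To make the setting concrete, below is a minimal NumPy sketch of the kind of bounded-width architecture the abstract refers to: a deep ReLU net whose hidden layers all have width $d+3$, mapping the cube $[0,1]^d$ to a scalar. The random weights and the specific dimensions are purely illustrative and do not implement the paper's construction; only the shape of the network (fixed narrow width, arbitrary depth) reflects the result being described.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def narrow_relu_net(x, weights, biases):
    """Forward pass through a fixed-width ReLU net.

    x       : (n, d) batch of points from the unit cube [0,1]^d
    weights : list of weight matrices; every hidden layer has width d+3
    biases  : matching list of bias vectors
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                 # hidden layers use ReLU
    return h @ weights[-1] + biases[-1]     # affine output layer (scalar)

# Illustrative setup: d-dimensional input, hidden width d+3, depth chosen freely.
d, width, depth, n = 4, 4 + 3, 10, 8
rng = np.random.default_rng(0)

dims = [d] + [width] * depth + [1]
weights = [rng.standard_normal((m, k)) / np.sqrt(m)
           for m, k in zip(dims[:-1], dims[1:])]
biases = [np.zeros(k) for k in dims[1:]]

x = rng.random((n, d))                      # points sampled from [0,1]^d
print(narrow_relu_net(x, weights, biases).shape)  # (8, 1)
```

The theorem's content is that, by choosing the depth and the weights appropriately (rather than randomly as above), such width-$(d+3)$ networks can approximate any continuous scalar function on $[0,1]^d$ to any desired accuracy, with the required depth quantifying the rate of approximation.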

