
Approximation of Smoothness Classes by Deep Rectifier Networks

(arXiv:2007.15645)
Published Jul 30, 2020 in math.FA, cs.LG, cs.NA, and math.NA

Abstract

We consider approximation rates of sparsely connected deep rectified linear unit (ReLU) and rectified power unit (RePU) neural networks for functions in Besov spaces $B^\alpha_q(L^p)$ in arbitrary dimension $d$, on general domains. We show that deep rectifier networks with a fixed activation function attain optimal or near-optimal approximation rates for functions in the Besov space $B^\alpha_\tau(L^\tau)$ on the critical embedding line $1/\tau = \alpha/d + 1/p$ for \emph{arbitrary} smoothness order $\alpha > 0$. Using interpolation theory, this implies that the entire range of smoothness classes at or above the critical line is (near-)optimally approximated by deep ReLU/RePU networks.
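As a small worked example of the critical embedding line (our own illustration; these specific numbers are not taken from the paper): with dimension $d = 2$, target integrability $p = 2$, and smoothness $\alpha = 1$, the relation $1/\tau = \alpha/d + 1/p$ gives

$$\frac{1}{\tau} = \frac{1}{2} + \frac{1}{2} = 1, \qquad \text{so } \tau = 1,$$

i.e. the critical space is $B^1_1(L^1)$. Increasing $\alpha$ pushes $\tau$ below $1$, so for high smoothness the spaces on the critical line are quasi-Banach Besov spaces rather than ordinary Banach spaces, which is broadly why covering \emph{arbitrary} $\alpha > 0$ is a delicate regime.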
