Simultaneous Neural Network Approximation for Smooth Functions (2109.00161v3)

Published 1 Sep 2021 in math.NA and cs.NA

Abstract: We establish in this work approximation results of deep neural networks for smooth functions measured in Sobolev norms, motivated by recent developments in numerical solvers for partial differential equations using deep neural networks. Our approximation results are nonasymptotic in the sense that the error bounds are explicitly characterized in terms of both the width and depth of the networks simultaneously, with all involved constants explicitly determined. Namely, for $f\in C^s([0,1]^d)$, we show that deep ReLU networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ can achieve a nonasymptotic approximation rate of $\mathcal{O}(N^{-2(s-1)/d}L^{-2(s-1)/d})$ with respect to the $\mathcal{W}^{1,p}([0,1]^d)$ norm for $p\in[1,\infty)$. If either the ReLU function or its square is used as the activation function to construct deep neural networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ to approximate $f\in C^s([0,1]^d)$, the approximation rate is $\mathcal{O}(N^{-2(s-n)/d}L^{-2(s-n)/d})$ with respect to the $\mathcal{W}^{n,p}([0,1]^d)$ norm for $p\in[1,\infty)$.
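
In display form, the first result can be read schematically as follows. This is a restatement of the rate claimed in the abstract, not a quotation of the paper's theorem: the constant $C(s,d)$ and the factor $\|f\|_{C^s([0,1]^d)}$ are placeholders for the explicit constants determined in the paper, which are not reproduced here.

$$
\inf_{\phi \in \Phi_{N,L}} \|f - \phi\|_{\mathcal{W}^{1,p}([0,1]^d)} \;\le\; C(s,d)\,\|f\|_{C^s([0,1]^d)}\, N^{-2(s-1)/d} L^{-2(s-1)/d},
$$

where $\Phi_{N,L}$ denotes the class of ReLU networks of width $\mathcal{O}(N\log N)$ and depth $\mathcal{O}(L\log L)$, and $p\in[1,\infty)$.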
