
Error analysis for empirical risk minimization over clipped ReLU networks in solving linear Kolmogorov partial differential equations (2310.12582v2)

Published 19 Oct 2023 in math.NA and cs.NA

Abstract: Deep learning algorithms have been successfully applied to numerically solve linear Kolmogorov partial differential equations (PDEs). Recent research shows that if the initial functions are bounded, empirical risk minimization (ERM) over clipped ReLU networks generalizes well for solving linear Kolmogorov PDEs. In this paper, we propose a truncation technique to extend these generalization results to polynomially growing initial functions. Specifically, we prove that, under a suitable assumption, the sample size required to achieve a generalization error within $\varepsilon$ with confidence level $\varrho$ grows polynomially in the size of the clipped neural networks and in $(\varepsilon^{-1},\varrho^{-1})$, which means that the curse of dimensionality is broken. Moreover, we verify that the required assumptions hold for Black-Scholes PDEs and heat equations, two important cases of linear Kolmogorov PDEs. For the approximation error, under certain assumptions, we establish approximation results for clipped ReLU neural networks approximating the solution of linear Kolmogorov PDEs. Consequently, we establish that ERM over artificial neural networks indeed overcomes the curse of dimensionality for a larger class of linear Kolmogorov PDEs.
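To make the setting concrete, the following is a minimal sketch (not the authors' code) of ERM over a clipped ReLU network for one of the two verified cases, the heat equation $u_t = \Delta u$, using its Feynman-Kac representation $u(T,x)=\mathbb{E}[\varphi(x+\sqrt{2T}\,Z)]$ with $Z\sim\mathcal{N}(0,I_d)$. The architecture, domain, clipping level $D$, and truncation level $M$ are illustrative assumptions, as is the choice of a quadratically growing initial function.

```python
# Hedged sketch: ERM over a clipped ReLU network for a d-dimensional heat
# equation u_t = Delta u, u(0, .) = phi. All hyperparameters below are
# illustrative assumptions, not the paper's exact choices.

import torch
import torch.nn as nn

d, D, M = 10, 50.0, 25.0          # dimension, output clipping level, truncation level
a, b, T = -1.0, 1.0, 1.0          # spatial domain [a, b]^d and terminal time

def phi(x):
    # Polynomially growing initial function (here: squared Euclidean norm).
    return (x ** 2).sum(dim=-1, keepdim=True)

def phi_truncated(x):
    # Truncation technique: clamp the unbounded initial function to [-M, M].
    return phi(x).clamp(-M, M)

class ClippedReLUNet(nn.Module):
    # ReLU multilayer perceptron whose output is clipped to [-D, D].
    def __init__(self, width=64, depth=3):
        super().__init__()
        layers, in_dim = [], d
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers.append(nn.Linear(in_dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).clamp(-D, D)   # clipping step

model = ClippedReLUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # ERM objective from the Feynman-Kac representation: regress the network
    # on noisy endpoint evaluations of the truncated initial function.
    x = a + (b - a) * torch.rand(512, d)       # uniform samples in [a, b]^d
    z = torch.randn(512, d)
    y = phi_truncated(x + (2 * T) ** 0.5 * z)  # one-sample Monte Carlo label
    loss = ((model(x) - y) ** 2).mean()        # empirical (squared) risk
    opt.zero_grad(); loss.backward(); opt.step()
```

The final `clamp` on the network output is what makes this a clipped ReLU network, while clamping $\varphi$ to $[-M, M]$ mirrors the truncation technique the abstract describes for handling polynomially growing initial functions.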
