Deep Learning Based on Randomized Quasi-Monte Carlo Method for Solving Linear Kolmogorov Partial Differential Equation (2310.18100v2)

Published 27 Oct 2023 in math.NA and cs.NA

Abstract: Deep learning algorithms have been widely used to solve linear Kolmogorov partial differential equations (PDEs) in high dimensions, where the loss function is defined as a mathematical expectation. We propose to use the randomized quasi-Monte Carlo (RQMC) method instead of the Monte Carlo (MC) method for computing the loss function. In theory, we decompose the error from empirical risk minimization (ERM) into the generalization error and the approximation error. Notably, the approximation error is independent of the sampling methods. We prove that the convergence order of the mean generalization error for the RQMC method is $O(n^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$, while for the MC method it is $O(n^{-1/2+\epsilon})$ for arbitrarily small $\epsilon>0$. Consequently, we find that the overall error for the RQMC method is asymptotically smaller than that for the MC method as $n$ increases. Our numerical experiments show that the algorithm based on the RQMC method consistently achieves smaller relative $L^{2}$ error than that based on the MC method.
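
The abstract describes replacing i.i.d. Monte Carlo samples with randomized quasi-Monte Carlo points when approximating the expectation that defines the training loss. The sketch below is a minimal illustration (not the authors' implementation) contrasting the two estimators using scrambled Sobol' points from SciPy; the integrand, dimension, and sample size are illustrative placeholders standing in for the squared-residual loss of the paper.

```python
# Minimal sketch: estimating an expectation-type loss with plain Monte Carlo
# vs. randomized quasi-Monte Carlo (scrambled Sobol' points).
# The integrand, dimension, and sample size are illustrative assumptions.
import numpy as np
from scipy.stats import qmc, norm

dim = 10          # illustrative problem dimension
n = 2**12         # sample size; powers of two suit Sobol' sequences
rng = np.random.default_rng(0)

def integrand(z):
    # Stand-in for a squared-residual loss integrand; z has shape (n, dim).
    return np.exp(-0.5 * np.sum(z**2, axis=1) / dim)

# Plain Monte Carlo: i.i.d. standard normal samples.
z_mc = rng.standard_normal((n, dim))
loss_mc = integrand(z_mc).mean()

# Randomized QMC: scrambled Sobol' points mapped to normals via the inverse CDF.
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
u = sobol.random(n)                        # points in (0, 1)^dim
u = np.clip(u, 1e-12, 1 - 1e-12)           # guard against endpoint values
z_rqmc = norm.ppf(u)                       # transform to standard normals
loss_rqmc = integrand(z_rqmc).mean()

print(f"MC estimate:   {loss_mc:.6f}")
print(f"RQMC estimate: {loss_rqmc:.6f}")
```

Averaging each estimator over independent replications (fresh random seeds for MC, fresh scramblings for RQMC) and plotting the error against $n$ would exhibit the $O(n^{-1/2+\epsilon})$ versus $O(n^{-1+\epsilon})$ rates discussed in the abstract.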

Authors (3)
  1. Jichang Xiao (3 papers)
  2. Fengjiang Fu (2 papers)
  3. Xiaoqun Wang (94 papers)
