Neural tangent kernel analysis of PINN for advection-diffusion equation

(2211.11716)
Published Nov 21, 2022 in physics.comp-ph and stat.ML

Abstract

Physics-informed neural networks (PINNs) numerically approximate the solution of a partial differential equation (PDE) by incorporating the residual of the PDE, along with its initial/boundary conditions, into the loss function. Despite their partial success, PINNs are known to struggle even in simple cases where a closed-form analytical solution is available. To better understand the learning mechanism of PINNs, this work presents a systematic analysis of PINNs for the linear advection-diffusion equation (LAD) using Neural Tangent Kernel (NTK) theory. Through the NTK analysis, the effects of the advection speed and diffusion parameter on the training dynamics of PINNs are studied and clarified. We show that the training difficulty of PINNs results from 1) the so-called spectral bias, which makes high-frequency behaviours difficult to learn; and 2) a disparity in convergence rates between different loss components, which can cause training to fail. The latter occurs even when the solution of the underlying PDE does not exhibit high-frequency behaviour. Furthermore, we observe that this training difficulty manifests somewhat differently in advection-dominated and diffusion-dominated regimes. Different strategies to address these issues are also discussed. In particular, we demonstrate that periodic activation functions can partly resolve the spectral bias issue.
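The composite loss described above can be illustrated with a minimal sketch. The function `pinn_loss` below is not the authors' implementation: it treats a generic trial function `u(x, t)` as a stand-in for a network and approximates derivatives with central finite differences (a real PINN would use automatic differentiation), combining the LAD residual u_t + c·u_x − ν·u_xx with example initial and boundary terms. The specific IC (u(x,0) = sin(2πx)) and homogeneous Dirichlet BCs are illustrative assumptions, not taken from the paper.

```python
import math

def pinn_loss(u, c, nu, n_res=20, n_ic=20, n_bc=20, h=1e-4):
    """Sketch of a PINN-style composite loss for u_t + c*u_x = nu*u_xx
    on (x, t) in [0,1] x [0,1], with derivatives via finite differences."""
    # PDE residual term over interior collocation points
    res = 0.0
    for i in range(n_res):
        x = (i + 0.5) / n_res
        t = (i + 0.5) / n_res
        u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
        u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
        u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
        res += (u_t + c * u_x - nu * u_xx) ** 2
    # Initial-condition term (illustrative IC: u(x, 0) = sin(2*pi*x))
    ic = sum(
        (u((j + 0.5) / n_ic, 0.0) - math.sin(2 * math.pi * (j + 0.5) / n_ic)) ** 2
        for j in range(n_ic)
    )
    # Boundary-condition term (illustrative homogeneous Dirichlet BCs at x = 0, 1)
    bc = sum(
        u(0.0, (k + 0.5) / n_bc) ** 2 + u(1.0, (k + 0.5) / n_bc) ** 2
        for k in range(n_bc)
    )
    # Equal weighting of the three components; the convergence-rate disparity
    # discussed in the abstract arises precisely because these components
    # can be learned at very different rates during training.
    return res / n_res + ic / n_ic + bc / n_bc
```

For the pure-diffusion case (c = 0) with these conditions, the exact solution u(x, t) = exp(−ν(2π)²t)·sin(2πx) drives the loss to near zero, while an arbitrary trial function yields a large value, which is the signal gradient descent would minimize.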
