
Convergence and error control of consistent PINNs for elliptic PDEs (2406.09217v2)

Published 13 Jun 2024 in math.NA and cs.NA

Abstract: We provide an a priori analysis of collocation methods for solving elliptic boundary value problems. They begin with information in the form of point values of the data and utilize only this information to numerically approximate the solution $u$ of the PDE. For such a method to provide an approximation with guaranteed error bounds, additional assumptions on the data, called model class assumptions, are needed. We determine the best error of approximating $u$ in the energy norm, in terms of the total number of point samples, under all Besov class model assumptions for the right-hand side and boundary data. We then turn to the study of numerical procedures and analyze whether a proposed numerical procedure achieves the optimal recovery error. We analyze numerical methods which generate the numerical approximation to $u$ by minimizing specified data-driven loss functions over a set $\Sigma$ which is either a finite dimensional linear space, or more generally, a finite dimensional manifold. We show that the success of such a procedure depends critically on choosing a data-driven loss function that is consistent with the PDE and provides sharp error control. Based on this analysis, a new loss function is proposed. We also address the recent methods of Physics-Informed Neural Networks. We prove that minimization of the new loss over restricted neural network spaces $\Sigma$ provides an optimal recovery of the solution $u$, provided that the optimization problem can be numerically executed and $\Sigma$ has sufficient approximation capabilities. We also analyze variants of the new loss function which are more practical for implementation. Finally, numerical examples illustrating the benefits of the proposed loss functions are given.
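The abstract describes generating an approximation to $u$ by minimizing a data-driven collocation loss over a finite dimensional set $\Sigma$. The following is a minimal illustrative sketch of that general setup, not the paper's proposed loss: it takes $\Sigma$ to be a linear span of sine functions and minimizes a plain least-squares residual loss for $-u'' = f$ on $(0,1)$ with zero boundary data. The model problem, basis, and point counts are all assumptions chosen for illustration.

```python
import numpy as np

# Illustrative model problem (not from the paper):
#   -u''(x) = f(x) on (0,1),  u(0) = u(1) = 0,
# with f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
n = 5                                   # dimension of the linear space Sigma
m = 50                                  # number of interior collocation points
x = np.linspace(0.0, 1.0, m + 2)[1:-1]  # point samples of the data (interior)
f = np.pi**2 * np.sin(np.pi * x)        # right-hand side sampled at the points

# Basis phi_k(x) = sin(k pi x), k = 1..n: satisfies the boundary conditions
# exactly, and -phi_k''(x) = (k pi)^2 phi_k(x), so the PDE residual is linear
# in the coefficients c.
k = np.arange(1, n + 1)
A = (k * np.pi) ** 2 * np.sin(np.pi * np.outer(x, k))  # columns: -phi_k'' at x

# Minimize the discrete least-squares collocation loss
#   L(c) = sum_i ( -u_c''(x_i) - f(x_i) )^2   over u_c in Sigma.
c, *_ = np.linalg.lstsq(A, f, rcond=None)

u_approx = np.sin(np.pi * np.outer(x, k)) @ c
err = np.max(np.abs(u_approx - np.sin(np.pi * x)))
print(err)  # near machine precision: the exact solution lies in Sigma
```

Because the exact solution here lies in $\Sigma$, the recovered coefficient vector is essentially $c = (1, 0, \dots, 0)$. The paper's point is that for nonlinear sets $\Sigma$ (e.g. neural network classes) and rough data, the choice of loss matters: a naive residual loss like the one above need not give sharp error control in the energy norm, which motivates their consistent loss.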

Citations (2)
