
Accelerated replica exchange stochastic gradient Langevin diffusion enhanced Bayesian DeepONet for solving noisy parametric PDEs (2111.02484v1)

Published 3 Nov 2021 in math.NA, cs.LG, and cs.NA

Abstract: Deep Operator Networks (DeepONets) are a fundamentally different class of neural networks that we train to approximate nonlinear operators, including the solution operator of parametric partial differential equations (PDEs). DeepONets have shown remarkable approximation and generalization capabilities even when trained with relatively small datasets. However, the performance of DeepONets deteriorates when the training data is polluted with noise, a scenario that occurs very often in practice. To enable DeepONet training with noisy data, we propose using the Bayesian framework of replica-exchange Langevin diffusion. Such a framework uses two particles, one for exploring and another for exploiting the loss-function landscape of DeepONets. We show that the proposed framework's exploration and exploitation capabilities enable (1) improved training convergence for DeepONets in noisy scenarios and (2) attaching an uncertainty estimate to the predicted solutions of parametric PDEs. In addition, we show that replica-exchange Langevin diffusion (remarkably) also improves the DeepONet's mean prediction accuracy in noisy scenarios compared with vanilla DeepONets trained with state-of-the-art gradient-based optimization algorithms (e.g., Adam). To reduce the potentially high computational cost of replica exchange, in this work we propose an accelerated training framework for replica-exchange Langevin diffusion that exploits the neural network architecture of DeepONets to reduce its computational cost by up to 25% without compromising the proposed framework's performance. Finally, we illustrate the effectiveness of the proposed Bayesian framework using a series of experiments on four parametric PDE problems.
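For intuition, the following is a minimal sketch of the general two-particle replica-exchange stochastic gradient Langevin scheme the abstract refers to: a low-temperature particle exploits the loss landscape, a high-temperature particle explores it, and the two swap according to a Metropolis-style rule. The placeholder loss, step size, temperatures, and swap rule are illustrative assumptions; this is not the authors' implementation and omits their DeepONet-specific acceleration.

# Sketch of two-particle replica-exchange stochastic gradient Langevin diffusion.
# The quadratic-plus-cosine "loss" stands in for a (mini-batch) DeepONet training
# loss; eta, tau1, and tau2 are placeholder hyperparameters.
import numpy as np

def loss(theta):
    # Placeholder non-convex loss.
    return 0.5 * np.sum(theta**2) + np.sum(np.cos(3.0 * theta))

def grad(theta):
    # Gradient of the placeholder loss (a stochastic mini-batch gradient in practice).
    return theta - 3.0 * np.sin(3.0 * theta)

def resgld(theta1, theta2, eta=1e-3, tau1=0.01, tau2=1.0, steps=5000, seed=0):
    # theta1: exploiting particle (low temperature tau1)
    # theta2: exploring particle (high temperature tau2)
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        # Langevin updates: gradient step plus temperature-scaled Gaussian noise.
        theta1 = theta1 - eta * grad(theta1) + np.sqrt(2 * eta * tau1) * rng.standard_normal(theta1.shape)
        theta2 = theta2 - eta * grad(theta2) + np.sqrt(2 * eta * tau2) * rng.standard_normal(theta2.shape)
        # Metropolis-style swap: when the explorer sits in a lower-energy region,
        # hand that position to the exploiter so it can refine it.
        log_ratio = (1.0 / tau1 - 1.0 / tau2) * (loss(theta1) - loss(theta2))
        if np.log(rng.uniform()) < log_ratio:
            theta1, theta2 = theta2, theta1
    return theta1  # low-temperature particle: the parameters kept for prediction

theta = resgld(np.ones(4), -np.ones(4))
print(loss(theta))

Collecting the low-temperature particle's iterates over training (rather than only its final value) is what yields posterior samples, and hence the uncertainty estimates mentioned in the abstract.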

Citations (26)
