
A Stochastic Gradient Method with Mesh Refinement for PDE Constrained Optimization under Uncertainty (1905.08650v3)

Published 21 May 2019 in math.OC, cs.NA, and math.NA

Abstract: Models incorporating uncertain inputs, such as random forces or material parameters, have been of increasing interest in PDE-constrained optimization. In this paper, we focus on the efficient numerical minimization of a convex and smooth tracking-type functional subject to a linear partial differential equation with random coefficients and box constraints. The approach we take is based on stochastic approximation where, in place of a true gradient, a stochastic gradient is chosen using one sample from a known probability distribution. Feasibility is maintained by performing a projection at each iteration. In the application of this method to PDE-constrained optimization under uncertainty, new challenges arise. We observe the discretization error made by approximating the stochastic gradient using finite elements. Analyzing the interplay between PDE discretization and stochastic error, we develop a mesh refinement strategy coupled with decreasing step sizes. Additionally, we develop a mesh refinement strategy for the modified algorithm using iterate averaging and larger step sizes. The effectiveness of the approach is demonstrated numerically for different random field choices.
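The abstract describes a projected stochastic gradient method: at each iteration a single sample from the known probability distribution replaces the true gradient, feasibility is restored by projecting onto the box constraints, step sizes decrease, and the finite element mesh is refined as the iteration proceeds. The snippet below is a minimal illustrative sketch of that loop structure, not the paper's algorithm: the PDE solve is replaced by a toy one-dimensional "state map", the refinement schedule is a fixed heuristic rather than the paper's error-balancing strategy, and all names (state_map, stochastic_gradient, refine, theta, refine_every) are hypothetical.

```python
# Hypothetical sketch of a projected stochastic gradient loop with mesh
# refinement and decreasing step sizes, in the spirit of the abstract.
# The PDE solve is replaced by a toy linear "state map" on a 1D grid.
import numpy as np

rng = np.random.default_rng(0)

def state_map(z, xi):
    # Toy surrogate for the PDE solution operator: a random scaling of the
    # control (stands in for the solution of a linear PDE with random coefficient xi).
    return (1.0 + 0.5 * xi) * z

def stochastic_gradient(z, x, u_target, alpha):
    # One-sample stochastic gradient of the tracking-type functional
    # J(z) = E[ 0.5 * ||S(xi) z - u_d||^2 ] + (alpha / 2) * ||z||^2.
    xi = rng.normal()                     # one draw from the known distribution
    residual = state_map(z, xi) - u_target(x)
    return (1.0 + 0.5 * xi) * residual + alpha * z   # adjoint of the toy state map

def project(z, lo, hi):
    # Projection onto the box constraints [lo, hi] (componentwise clipping).
    return np.clip(z, lo, hi)

def refine(z, x):
    # "Mesh refinement": double the number of grid points and interpolate
    # the current control onto the finer grid.
    x_fine = np.linspace(0.0, 1.0, 2 * len(x) - 1)
    return np.interp(x_fine, x, z), x_fine

# Illustrative problem data.
u_target = lambda x: np.sin(np.pi * x)
alpha, lo, hi = 1e-3, 0.0, 0.8
theta = 1.0                               # step-size constant
refine_every = 200                        # heuristic schedule; the paper instead couples
                                          # refinement to the PDE discretization error

x = np.linspace(0.0, 1.0, 9)              # coarse initial mesh
z = np.zeros_like(x)                      # initial control

for k in range(1, 1001):
    g = stochastic_gradient(z, x, u_target, alpha)
    z = project(z - (theta / k) * g, lo, hi)   # projected step, step size ~ 1/k
    if k % refine_every == 0:
        z, x = refine(z, x)

print("final mesh points:", len(x))
```

The variant with iterate averaging mentioned in the abstract would additionally maintain a running average of the iterates and use larger, more slowly decaying step sizes; the refinement schedule would then be adapted to that averaged sequence.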

Citations (33)