Topology Optimization under Uncertainty using a Stochastic Gradient-based Approach (1902.04562v2)

Published 11 Feb 2019 in math.OC and cs.NA

Abstract: Topology optimization under uncertainty (TOuU) often defines objectives and constraints by statistical moments of geometric and physical quantities of interest. Most traditional TOuU methods use gradient-based optimization algorithms and rely on accurate estimates of the statistical moments and their gradients, e.g., via adjoint calculations. When the number of uncertain inputs is large or the quantities of interest exhibit large variability, a large number of adjoint (and/or forward) solves may be required to ensure the accuracy of these gradients. The optimization procedure itself often requires a large number of iterations, which may render TOuU computationally expensive, if not infeasible. To tackle this difficulty, we propose an optimization approach that generates a stochastic approximation of the objective, constraints, and their gradients via a small number of adjoint (and/or forward) solves per iteration. A statistically independent (stochastic) approximation of these quantities is generated at each optimization iteration. The total cost of this approach is only a small factor larger than that of the corresponding deterministic TO problem. We incorporate the stochastic approximation of the objective, constraints, and their design sensitivities into two classes of optimization algorithms. First, we investigate the stochastic gradient descent (SGD) method and a number of its variants, which have been successfully applied to large-scale optimization problems in machine learning. Second, we study the use of the proposed stochastic approximation approach within conventional nonlinear programming methods, focusing on the Globally Convergent Method of Moving Asymptotes (GCMMA). The performance of these algorithms is investigated on structural design optimization problems using Solid Isotropic Material with Penalization (SIMP) as well as an explicit level set method.
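
Below is a minimal, self-contained sketch of the core idea, not the authors' implementation: at each design iteration, the mean compliance and its SIMP design sensitivity are estimated from only a handful of random-input samples, each costing one forward solve (compliance is self-adjoint), and the design is updated with a stochastic gradient step. The 1D bar model, the i.i.d. lognormal moduli and load, the normalized (move-limit-style) step, and the bisection projection of the volume constraint are illustrative simplifications standing in for the paper's SGD variants and GCMMA treatment; all helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D SIMP bar: n elements in series, node 0 fixed, axial point load at the free end.
n       = 60      # number of elements / design variables
penal   = 3.0     # SIMP penalization exponent
volfrac = 0.4     # target volume fraction
rho_min = 1e-3    # lower bound on element densities
L, A    = 1.0 / n, 1.0

def compliance_and_sensitivity(rho, E, f_load):
    """One forward solve for a single realization of the uncertain inputs.
    Compliance is self-adjoint, so no separate adjoint solve is needed here."""
    k = E * rho**penal * A / L                      # penalized element stiffnesses
    K = np.zeros((n, n))                            # reduced stiffness (free dofs 1..n)
    for e in range(n):
        K[e, e] += k[e]
        if e > 0:
            K[e - 1, e - 1] += k[e]
            K[e - 1, e] -= k[e]
            K[e, e - 1] -= k[e]
    f = np.zeros(n)
    f[-1] = f_load                                  # point load at the free end
    u = np.linalg.solve(K, f)
    du = np.diff(np.concatenate(([0.0], u)))        # element elongations
    c = f_load * u[-1]                              # compliance f^T u
    dc = -penal * rho**(penal - 1) * E * (A / L) * du**2   # standard SIMP sensitivity
    return c, dc

def sample_inputs(batch):
    """Hypothetical uncertainty model: i.i.d. lognormal element moduli and load magnitude."""
    E = np.exp(rng.normal(0.0, 0.3, size=(batch, n)))
    f = np.exp(rng.normal(0.0, 0.2, size=batch))
    return E, f

def project_volume(rho):
    """Euclidean projection onto {rho_min <= rho <= 1, mean(rho) = volfrac} by
    bisecting a uniform shift (a simple stand-in for constraint handling via GCMMA)."""
    lo, hi = -1.0, 1.0
    for _ in range(60):
        lam = 0.5 * (lo + hi)
        if np.clip(rho - lam, rho_min, 1.0).mean() > volfrac:
            lo = lam
        else:
            hi = lam
    return np.clip(rho - 0.5 * (lo + hi), rho_min, 1.0)

# Stochastic gradient loop: only `batch` forward solves per design iteration,
# instead of a full sampling study of the mean compliance and its gradient.
rho = np.full(n, volfrac)
batch, iters, step0 = 4, 200, 0.02
for it in range(iters):
    E, f = sample_inputs(batch)
    g = np.mean([compliance_and_sensitivity(rho, E[b], f[b])[1] for b in range(batch)],
                axis=0)                             # sample-average estimate of the mean-compliance gradient
    step = step0 / (1.0 + 0.01 * it)                # diminishing step size
    rho = project_volume(rho - step * g / (np.abs(g).max() + 1e-12))  # move-limit-style normalized step

print("final volume fraction:", rho.mean())
```

Replacing the projected step with an Adam-style update, or feeding the same batch estimates of objective, constraints, and sensitivities into a GCMMA subproblem, corresponds to the two algorithm classes studied in the paper.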

Citations (43)