Abstract

We present an augmented Lagrangian trust-region method to efficiently solve constrained optimization problems governed by large-scale nonlinear systems, with application to partial differential equation-constrained optimization. At each major augmented Lagrangian iteration, the expensive optimization subproblem involving the full nonlinear system is replaced by an empirical quadrature-based hyperreduced model constructed on-the-fly. To ensure convergence of these inexact augmented Lagrangian subproblems, we develop a bound-constrained trust-region method that allows for inexact gradient evaluations, and we specialize it to our setting, which leverages hyperreduced models. This approach circumvents a traditional training phase because the models are built on-the-fly in accordance with the requirements of the trust-region convergence theory. Two numerical experiments (constrained aerodynamic shape design) demonstrate the convergence and efficiency of the proposed method. A speedup of 12.7x relative to a standard optimization approach that does not leverage model reduction is demonstrated; this accounting includes all computational costs, even those traditionally considered "offline," such as snapshot collection and data compression.
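To make the outer-loop structure concrete, the following is a minimal runnable sketch of an augmented Lagrangian iteration on a toy equality-constrained problem. It is an illustration of the loop structure only, not the authors' solver: the bound-constrained trust-region subproblem solve on an on-the-fly hyperreduced model is stood in for by a generic bounded quasi-Newton minimizer (SciPy's L-BFGS-B), and the objective, constraint, bounds, and penalty-update rule are hypothetical choices made for this example.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):   # toy objective (stand-in for the expensive PDE-governed objective)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def c(x):   # toy equality constraint c(x) = 0
    return np.array([x[0] + x[1] - 3.0])

def aug_lagrangian(x, lam, mu):
    # Standard augmented Lagrangian: f(x) - lam^T c(x) + (mu/2) ||c(x)||^2
    cx = c(x)
    return f(x) - lam @ cx + 0.5 * mu * (cx @ cx)

x, lam, mu = np.zeros(2), np.zeros(1), 10.0
bounds = [(0.0, 5.0)] * 2   # simple bound constraints on the design variables

for k in range(20):
    # Subproblem: approximately minimize the augmented Lagrangian over the bounds.
    # In the paper this step is solved with an inexact-gradient trust-region method
    # applied to a hyperreduced model built on-the-fly; here a standard bounded
    # quasi-Newton solve stands in for it.
    res = minimize(aug_lagrangian, x, args=(lam, mu), bounds=bounds,
                   method="L-BFGS-B")
    x = res.x

    cx = c(x)
    if np.linalg.norm(cx) < 1e-8:
        break
    lam = lam - mu * cx            # first-order multiplier update
    mu = min(10.0 * mu, 1e8)       # increase the penalty parameter

print("solution:", x, "multiplier:", lam)
```

In the method described in the abstract, the key difference from this sketch is that each subproblem never touches the full nonlinear system directly: an empirical quadrature-based hyperreduced model is rebuilt at every major iteration, and the trust-region machinery controls the resulting gradient inexactness so that overall convergence is retained.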
