State-constrained Optimization Problems under Uncertainty: A Tensor Train Approach

(2301.08684)
Published Jan 20, 2023 in math.OC, cs.NA, and math.NA

Abstract

We propose an algorithm to solve optimization problems constrained by partial (ordinary) differential equations under uncertainty, with almost-sure constraints on the state variable. To alleviate the computational burden of high-dimensional random variables, we approximate all random fields by the tensor-train decomposition. To enable efficient tensor-train approximation of the state constraints, the latter are handled using the Moreau-Yosida penalty, with an additional smoothing of the positive part (plus/ReLU) function by a softplus function. We derive theoretical bounds on the constraint violation in terms of the Moreau-Yosida regularization parameter and the smoothing width of the softplus function. This result also yields a practical recipe for selecting these two parameters. When the optimization problem is strongly convex, we establish strong convergence of the regularized solution to the optimal control. We develop a second-order Newton-type method with a fast matrix-free action of the approximate Hessian to solve the smoothed Moreau-Yosida problem. This algorithm is tested on benchmark elliptic problems with random coefficients, optimization problems constrained by random elliptic variational inequalities, and a real-world epidemiological model with 20 random variables. These examples demonstrate mild (at most polynomial) scaling with respect to the dimension and regularization parameters.
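To make the smoothing concrete: the Moreau-Yosida penalty for an almost-sure constraint such as y <= psi involves the nonsmooth positive part max(0, y - psi), which the abstract replaces by the softplus function eps * log(1 + exp(x / eps)), recovering max(0, x) as eps -> 0. Below is a minimal NumPy sketch of this substitution, not the authors' implementation; the penalty form (gamma/2) * mean(softplus(y - psi)^2) and the names gamma, eps, and psi are illustrative assumptions.

```python
import numpy as np

def softplus(x, eps):
    # Smooth surrogate for the positive part (plus/ReLU) function:
    # eps * log(1 + exp(x / eps)) -> max(0, x) as eps -> 0.
    # logaddexp(0, t) computes log(1 + exp(t)) stably for large |t|.
    return eps * np.logaddexp(0.0, x / eps)

def smoothed_my_penalty(y, psi, gamma, eps):
    # Illustrative smoothed Moreau-Yosida penalty for the pointwise
    # constraint y <= psi: (gamma / 2) times the mean of
    # softplus(y - psi)^2 over the sample/quadrature points.
    return 0.5 * gamma * np.mean(softplus(y - psi, eps) ** 2)

# Example: the smoothed penalty is differentiable everywhere, which is
# what makes Newton-type solvers and tensor-train approximation viable.
y = np.linspace(-1.0, 1.0, 5)    # hypothetical state samples
psi = np.zeros_like(y)           # hypothetical constraint bound
print(smoothed_my_penalty(y, psi, gamma=1e3, eps=1e-2))
```

The abstract's bounds express the constraint violation in terms of both gamma and eps, so in practice the two parameters are chosen jointly rather than tuned independently.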
