Shape Optimization by Constrained First-Order Least Mean Approximation

(arXiv:2309.13595)
Published Sep 24, 2023 in math.NA and cs.NA

Abstract

In this work, the problem of shape optimization, subject to PDE constraints, is reformulated as an $L^p$ best approximation problem under divergence constraints to the shape tensor introduced in Laurain and Sturm: ESAIM Math. Model. Numer. Anal. 50 (2016). More precisely, the main result of this paper states that the $L^p$ distance of the above approximation problem is equal to the dual norm of the shape derivative considered as a functional on $W^{1,p^\ast}$ (where $1/p + 1/p^\ast = 1$). This implies that for any given shape, one can evaluate its distance from being a stationary one with respect to the shape derivative by simply solving the associated $L^p$-type least mean approximation problem. Moreover, the Lagrange multiplier for the divergence constraint turns out to be the shape deformation of steepest descent. This provides an alternative to the approach of Deckelnick, Herbert and Hinze: ESAIM Control Optim. Calc. Var. 28 (2022) for computing shape gradients in $W^{1,p^\ast}$ for $p^\ast \in (2, \infty)$. The least mean approximation problem is discretized with (lowest-order) matrix-valued Raviart-Thomas finite element spaces, leading to piecewise constant approximations of the shape deformation acting as Lagrange multiplier. Admissible deformations in $W^{1,p^\ast}$ to be used in a shape gradient iteration are reconstructed locally. Our computational results confirm that the $L^p$ distance of the best approximation does indeed measure the distance of the considered shape from optimality. Our tests also confirm the observation that choosing $p^\ast$ (much) larger than 2 (which forces $p$ close to 1 in the best approximation problem) reduces the chance of encountering mesh degeneracy during the shape gradient iteration.
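
For orientation, the main duality identity described above can be written schematically. This is only a sketch reconstructed from the abstract: $S$ stands for the shape tensor of Laurain and Sturm, $dJ(\Omega)$ for the shape derivative, and the admissible set $\mathcal{K}$ encoding the divergence constraint is a placeholder on our part; its exact definition is given in the paper itself:

$$\inf_{T \in \mathcal{K}} \, \| S - T \|_{L^p(\Omega)} \;=\; \sup_{\|V\|_{W^{1,p^\ast}} \leq 1} dJ(\Omega)[V], \qquad \frac{1}{p} + \frac{1}{p^\ast} = 1.$$

The Hölder-conjugate relation also makes the closing observation concrete: since $p = p^\ast/(p^\ast - 1)$, choosing for instance $p^\ast = 10$ gives $p = 10/9 \approx 1.11$, so driving $p^\ast$ far above 2 necessarily drives $p$ toward 1.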
