
Stochastic Optimization of Large-Scale Parametrized Dynamical Systems

arXiv:2311.08115
Published Nov 14, 2023 in math.OC, cs.SY, and eess.SY

Abstract

Many relevant problems in systems and control, such as controller synthesis, observer design, and model reduction, can be viewed as optimization problems involving dynamical systems: for instance, maximizing performance in the synthesis setting or minimizing error in the reduction setting. When the involved dynamics are large-scale (e.g., high-dimensional semi-discretizations of partial differential equations), the optimization becomes computationally infeasible. Existing methods in the literature either lack computational scalability or solve an approximation of the problem (thereby losing guarantees with respect to the original problem). In this paper, we propose a novel method that circumvents these issues. The method is an extension of Stochastic Gradient Descent (SGD), which is widely used for large-scale machine learning problems. The proposed SGD scheme minimizes the $\mathcal{H}_2$ norm of a (differentiable) parametrized dynamical system, and we prove that the scheme is guaranteed to preserve stability with high probability under boundedness conditions on the step size. Conditioned on stability preservation, we also obtain probabilistic convergence guarantees to local minimizers. The method is also applicable to problems involving non-realizable dynamics, as it requires only frequency-domain input-output samples. We demonstrate the potential of the approach on two numerical examples: fixed-order observer design for a large-scale thermal model and controller tuning for an infinite-dimensional system.
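The abstract only outlines the scheme, but its core mechanism — stochastic gradient descent on an $\mathcal{H}_2$-type objective estimated from sampled frequency-response data — can be illustrated compactly. The sketch below is not the paper's algorithm: the reference response `H_ref`, the first-order parametrization `H_model`, the Cauchy sampling density, and the step size are all assumptions chosen to make the example self-contained.

```python
import numpy as np

# Hypothetical target: frequency-response samples H_ref(i*omega) of the
# (possibly non-realizable) system to be matched; here a toy first-order lag.
def H_ref(omega):
    return 1.0 / (1j * omega + 2.0)

# Assumed parametrization H_theta(s) = b / (s + a) with theta = (a, b);
# the model is stable iff a > 0.
def H_model(theta, omega):
    a, b = theta
    return b / (1j * omega + a)

def grad_sample(theta, omega):
    """Gradient in theta of |H_model(theta, i*omega) - H_ref(i*omega)|^2."""
    a, b = theta
    e = H_model(theta, omega) - H_ref(omega)   # pointwise matching error
    dH_da = -b / (1j * omega + a) ** 2
    dH_db = 1.0 / (1j * omega + a)
    # d|e|^2 / dtheta = 2 * Re(conj(e) * dH/dtheta)
    return np.array([2.0 * (e.conjugate() * dH_da).real,
                     2.0 * (e.conjugate() * dH_db).real])

rng = np.random.default_rng(0)
theta = np.array([1.0, 1.0])  # stable initial guess
eta = 0.05                    # bounded step size (cf. the paper's condition)

for _ in range(5000):
    # Draw a frequency from a standard Cauchy density p; dividing by
    # 2*pi*p(omega) makes the sample an unbiased estimate of the gradient
    # of the H2 error (1/2pi) * integral |e(i*omega)|^2 d omega.
    omega = rng.standard_cauchy()
    p = 1.0 / (np.pi * (1.0 + omega ** 2))
    theta -= eta * grad_sample(theta, omega) / (2.0 * np.pi * p)

print(theta)  # should drift toward the exact match (a, b) = (2, 1)
```

The paper's contribution, beyond such a plain frequency-sampled SGD, is the guarantee that with bounded steps the iterates remain in the stable region (here, $a > 0$) with high probability; the sketch relies on that behavior rather than enforcing stability explicitly.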
