A Globally Convergent Policy Gradient Method for Linear Quadratic Gaussian (LQG) Control (2312.12173v3)

Published 19 Dec 2023 in math.OC, cs.SY, and eess.SY

Abstract: We present a model-based globally convergent policy gradient method (PGM) for linear quadratic Gaussian (LQG) control. First, we establish the equivalence between optimizing dynamic output feedback controllers and designing a static feedback gain for a system represented by a finite-length input-output history (IOH). This IOH-based approach allows us to explore LQG controllers within a parameter space defined by IOH gains. Second, by considering a control law comprising the IOH gain and a sufficiently small random perturbation, we show that the cost function, evaluated through the control law over IOH gains, is gradient-dominant and locally smooth, ensuring the global linear convergence of the PGM. Numerical simulations show that the dynamic controller learned by the proposed PGM is almost the same as the LQG optimal controller, indicating promising results even for reduced-order controller design.
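
To make the IOH parameterization concrete, here is a minimal, illustrative sketch of policy search over a static gain acting on a finite-length input-output history. It is not the paper's exact model-based gradient: for simplicity it estimates the gradient with two-point zeroth-order perturbations on simulated rollouts, and the plant matrices, window length, noise levels, and step sizes are made-up assumptions rather than values from the paper.

```python
# Illustrative sketch: policy search over an IOH-parameterized controller for an
# LQG-type problem. Assumptions (not from the paper): the plant below, the IOH
# window length N, the zeroth-order gradient estimate, and all tuning constants.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-input single-output plant (assumption)
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), 0.1 * np.eye(1)
sigma_w, sigma_v = 0.05, 0.05            # process / measurement noise std

N = 4                                     # IOH window length (past inputs and outputs)
dim_z = N * (B.shape[1] + C.shape[0])     # dimension of the IOH vector z_t

def rollout_cost(K, T=200, n_traj=8):
    """Average finite-horizon LQG cost of the static IOH control law u_t = K z_t."""
    total = 0.0
    for _ in range(n_traj):
        x = np.zeros((2, 1))
        u_hist = [np.zeros((1, 1))] * N
        y_hist = [np.zeros((1, 1))] * N
        for _ in range(T):
            z = np.vstack(u_hist + y_hist)        # stack past inputs and outputs
            u = K @ z
            total += float(x.T @ Q @ x + u.T @ R @ u)
            w = sigma_w * rng.standard_normal((2, 1))
            v = sigma_v * rng.standard_normal((1, 1))
            x = A @ x + B @ u + w
            y = C @ x + v
            u_hist = [u] + u_hist[:-1]
            y_hist = [y] + y_hist[:-1]
    return total / (n_traj * T)

# Simple zeroth-order policy gradient over the IOH gain K (illustrative only)
K = np.zeros((1, dim_z))
step, radius = 1e-3, 1e-2
for it in range(200):
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)
    # Two-point estimate of the directional derivative along a small random perturbation
    grad = (rollout_cost(K + radius * U) - rollout_cost(K - radius * U)) / (2 * radius) * U
    K -= step * grad
    if it % 50 == 0:
        print(f"iter {it:3d}  cost ~ {rollout_cost(K):.4f}")
```

The paper's method instead computes exact model-based gradients over the IOH gain and proves gradient dominance and local smoothness of the resulting cost, which is what yields global linear convergence; the sketch above only mirrors the parameterization and the perturbed control law at a schematic level.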
