Emergent Mind

Benign Nonconvex Landscapes in Optimal and Robust Control, Part I: Global Optimality

(2312.15332)
Published Dec 23, 2023 in math.OC, cs.SY, eess.SY, and math.DS

Abstract

Direct policy search has achieved great empirical success in reinforcement learning. Many recent studies have revisited its theoretical foundations for continuous control, revealing elegant nonconvex geometry in various benchmark problems, especially fully observable state-feedback cases. This paper considers two fundamental optimal and robust control problems with partial observability: Linear Quadratic Gaussian (LQG) control with stochastic noise, and $\mathcal{H}_\infty$ robust control with adversarial noise. In the policy space, the former problem is smooth but nonconvex, while the latter is nonsmooth and nonconvex. We highlight some interesting and surprising "discontinuity" of the LQG and $\mathcal{H}_\infty$ cost functions around the boundary of their domains. Despite the lack of convexity (and possibly smoothness), our main results show that for a class of non-degenerate policies, all Clarke stationary points are globally optimal and there is no spurious local minimum for both LQG and $\mathcal{H}_\infty$ control. Our proof techniques rely on a new and unified framework of Extended Convex Lifting (ECL), which reconciles the gap between nonconvex policy optimization and convex reformulations. This ECL framework is of independent interest, and we will discuss its details in Part II of this paper.
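To fix ideas, the two problems the abstract contrasts can be sketched in standard textbook notation (the symbols below are the usual ones for partially observed linear systems, not necessarily the paper's own):

```latex
% Plant with partial observability (standard notation, assumed, not from the paper):
%   x_{t+1} = A x_t + B u_t + w_t,   y_t = C x_t + v_t,
% where w_t, v_t are process and measurement noise, and the search is over
% dynamic output-feedback policies K mapping measurements y to inputs u.
\begin{align*}
  \text{(LQG)} \quad
    & \min_{K} \; \lim_{T \to \infty} \frac{1}{T}\,
      \mathbb{E}\!\left[\sum_{t=0}^{T-1} x_t^\top Q x_t + u_t^\top R u_t\right]
      && \text{(stochastic $w_t, v_t$)}, \\[2pt]
  (\mathcal{H}_\infty) \quad
    & \min_{K} \; \bigl\| T_{zw}(K) \bigr\|_{\mathcal{H}_\infty}
      = \min_{K} \; \sup_{\|w\|_2 \le 1} \|z\|_2
      && \text{(adversarial $w$)},
\end{align*}
```

Here $T_{zw}(K)$ denotes the closed-loop transfer function from disturbance $w$ to performance output $z$. Parameterized over the controller matrices of $K$, the LQG cost is smooth but nonconvex, while the $\mathcal{H}_\infty$ norm is additionally nonsmooth, which is why the paper works with Clarke stationary points rather than ordinary gradients.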
