Emergent Mind

Learning-based synthesis of robust linear time-invariant controllers

(2112.03345)
Published Dec 6, 2021 in eess.SY and cs.SY

Abstract

Recent advances in learning for control make it possible to synthesize vehicle controllers from learned system dynamics while maintaining robust stability guarantees. However, no existing approach is well-suited for training linear time-invariant (LTI) controllers using arbitrary learned models of the dynamics. This article introduces a method to do so. It uses a robust control framework to derive robust stability criteria, and it uses simulated policy rollouts to obtain gradients on the controller parameters, which serve to improve closed-loop performance. By formulating the stability criteria as penalties with computable gradients, they can be used to guide the controller parameters toward robust stability during gradient descent. The approach is flexible in that it does not restrict the type of learned model used for the simulated rollouts. The robust control framework ensures that the controller is already robustly stabilizing when first implemented on the actual system, before any data has been collected. It also ensures that the system stays stable in the event of a shift in dynamics, provided the system behavior remains within the assumed uncertainty bounds. We demonstrate the approach by synthesizing a controller for simulated autonomous lane change maneuvers. This work thus presents a flexible approach to learning robustly stabilizing LTI controllers that takes advantage of modern machine learning techniques.
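The core idea in the abstract — tuning an LTI controller by gradient descent on a rollout cost plus a differentiable stability penalty — can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the toy dynamics `A`, `B`, the quadratic cost weights, the spectral-radius margin, the finite-difference gradient, and all step sizes are assumptions chosen for readability. The paper's actual method uses robust-control stability criteria over uncertainty bounds, whereas this sketch penalizes only the nominal closed-loop spectral radius.

```python
# Illustrative sketch (assumed toy setup, not the paper's method):
# train a static state-feedback gain K for a learned linear model
# x' = A x + B u with policy u = -K x, by gradient descent on a
# rollout cost plus a penalty pushing the closed-loop spectral
# radius below 1 - margin.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy "learned" dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # state cost weight
R = 0.1 * np.eye(1)                       # input cost weight
x0 = np.array([1.0, 0.0])                 # rollout initial state

def rollout_cost(K, horizon=50):
    """Cost of a simulated policy rollout under u = -K x."""
    x, cost = x0.copy(), 0.0
    for _ in range(horizon):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost

def stability_penalty(K, margin=0.02):
    """Penalty active when the closed-loop spectral radius
    exceeds 1 - margin (discrete-time stability with margin)."""
    rho = max(abs(np.linalg.eigvals(A - B @ K)))
    return max(0.0, rho - (1.0 - margin)) ** 2

def loss(K, weight=100.0):
    return rollout_cost(K) + weight * stability_penalty(K)

def grad(K, eps=1e-5):
    """Central finite-difference gradient of the loss w.r.t. K
    (a stand-in for the paper's computable gradients)."""
    g = np.zeros_like(K)
    for idx in np.ndindex(K.shape):
        Kp, Km = K.copy(), K.copy()
        Kp[idx] += eps
        Km[idx] -= eps
        g[idx] = (loss(Kp) - loss(Km)) / (2 * eps)
    return g

K = np.array([[1.0, 1.0]])                # initial stabilizing guess
for _ in range(200):
    K -= 1e-3 * grad(K)                   # gradient-descent step

rho_final = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho_final)
```

The penalty term is the key structural point: because it is zero inside the stable region and grows smoothly outside it, it can be minimized alongside the performance cost by the same gradient-descent loop, steering the parameters toward robust stability rather than enforcing it as a hard constraint.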

