Abstract

The goal of this paper is to address finite-horizon minimum variance and covariance steering problems for discrete-time stochastic (Gaussian) linear systems. On the one hand, the minimum variance problem seeks a control policy that will steer the state mean of an uncertain system to a prescribed quantity while minimizing the trace of its terminal state covariance (or variance). On the other hand, the covariance steering problem seeks a control policy that will steer the covariance of the terminal state to a prescribed positive definite matrix. We propose a solution approach that relies on the stochastic version of the affine disturbance feedback control parametrization, according to which the control input at each stage can be expressed as an affine function of the history of disturbances that have acted upon the system. Our analysis reveals that this particular parametrization allows one to reduce the stochastic optimal control problems considered herein to tractable convex programs with essentially the same decision variables. This is in contrast with other control policy parametrizations, such as the state feedback parametrization, in which the decision variables of the convex program do not coincide with the parameters of the control policy of the stochastic optimal control problem. In addition, we propose a variation of the control parametrization that relies on truncated histories of past disturbances. We show that by selecting the length of the truncated sequences appropriately, we can design suboptimal controllers that strike the desired balance between performance and computational cost.
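
For concreteness, the following is a minimal sketch of the parametrization described above, assuming linear dynamics of the form x_{k+1} = A_k x_k + B_k u_k + w_k with Gaussian disturbances w_k; the symbols \bar{u}_k, K_{k,i}, and the memory length p are illustrative notation, not fixed by the abstract:

    u_k = \bar{u}_k + \sum_{i=0}^{k-1} K_{k,i}\, w_i                      % affine disturbance feedback (full history)

    u_k = \bar{u}_k + \sum_{i=\max(0,\, k-p)}^{k-1} K_{k,i}\, w_i         % truncated variant (only the p most recent disturbances)

Here the open-loop terms \bar{u}_k and the gains K_{k,i} are the decision variables. Because the state trajectory is affine in these variables, the terminal state mean is affine and the terminal covariance is a convex (quadratic) function of them, which is consistent with the reduction to tractable convex programs claimed in the abstract; shrinking p trades optimality for fewer decision variables.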
