Abstract

In this work, we consider the problem of steering the first two moments of the uncertain state of a discrete-time nonlinear stochastic system to prescribed goal quantities at a given final time. In principle, this problem can be formulated as a density tracking problem, which seeks a feedback policy that keeps the probability density function of the system's state close, in terms of an appropriate metric, to the goal density. Solving this infinite-dimensional problem, however, can be complex and computationally expensive. Instead, we propose a more tractable and intuitive approach that relies on a greedy control policy. This policy is composed of the first elements of the control policies that solve a sequence of corresponding linearized covariance steering problems. Each of these covariance steering problems relies only on the information available about the state mean and state covariance at the current stage and can be formulated as a tractable (finite-dimensional) convex program. At each stage, the state statistics are updated by approximating, via the (scaled) unscented transform, the predicted state mean and covariance of the resulting closed-loop nonlinear system at the next stage. Numerical simulations illustrating the key ideas of our approach are also presented.
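To make the prediction step described above concrete, the following is a minimal Python/NumPy sketch of propagating a state mean and covariance through one stage of generic nonlinear dynamics using the scaled unscented transform. It is an illustration only, not the authors' implementation; the dynamics `f`, the noise covariance `Q`, and the parameter values are placeholder assumptions.

```python
# Illustrative sketch: one-stage mean/covariance prediction via the scaled
# unscented transform for x_{k+1} = f(x_k) + w_k, with w_k ~ N(0, Q).
# The dynamics f and covariance Q below are hypothetical examples.
import numpy as np

def unscented_propagate(f, mean, cov, Q, alpha=1e-3, beta=2.0, kappa=0.0):
    """Approximate the mean and covariance of f(x) + w given x ~ N(mean, cov)."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n

    # Sigma points: the mean plus/minus the columns of a matrix square root.
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])        # (2n+1, n)

    # Weights for the mean and covariance estimates.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Push each sigma point through the nonlinear (closed-loop) dynamics.
    prop = np.array([f(x) for x in sigma])                    # (2n+1, n)

    mean_next = wm @ prop
    dev = prop - mean_next
    cov_next = dev.T @ (wc[:, None] * dev) + Q
    return mean_next, cov_next


# Example usage with a simple hypothetical nonlinear map.
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * np.sin(x[0])])
m_next, P_next = unscented_propagate(
    f, np.zeros(2), 0.1 * np.eye(2), 0.01 * np.eye(2)
)
```

In the approach described in the abstract, such predicted statistics would then seed the next linearized covariance steering problem in the sequence.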
