Abstract

In this paper, we propose a fast reinforcement learning (RL) control algorithm that enables online control of large-scale networked dynamic systems. RL is an effective way of designing model-free linear quadratic regulator (LQR) controllers for linear time-invariant (LTI) networks with unknown state-space models. However, when the network size is large, conventional RL can result in unacceptably long learning times. The proposed approach constructs a compressed state vector by projecting the measured state through a projection matrix. This matrix is built from online state measurements so that it captures the dominant controllable subspace of the open-loop network model. An RL controller is then learned using the reduced-dimensional state instead of the original state, such that the resulting closed-loop cost is close to the optimal LQR cost. The numerical benefits and the cyber-physical implementation benefits of the approach are verified on illustrative examples, including wide-area control of the IEEE 68-bus benchmark power system.
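
The sketch below illustrates the dimension-reduction idea described in the abstract, not the authors' actual algorithm: state snapshots from the open-loop network are used to build a projection matrix (here via an SVD of the snapshot matrix), the measured state is compressed through that projection, and a controller is designed on the reduced state. The network size n, input size m, reduced dimension r, and the LQR weights are all assumed for illustration. As a stand-in for the paper's model-free RL step, this sketch identifies a reduced model by least squares and solves the discrete Riccati equation; the paper instead learns the reduced-order gain directly from data with RL.

```python
# Hypothetical sketch (not the authors' code): data-driven projection of the
# state, followed by controller design on the compressed state z = P^T x.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# --- Surrogate "unknown" LTI network (used only to generate data) ---
n, m = 50, 4                                       # assumed full state / input sizes
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
A *= 0.98 / max(abs(np.linalg.eigvals(A)))         # scale to a stable open loop
B = rng.standard_normal((n, m))

# --- Step 1: collect open-loop state snapshots under exploratory inputs ---
T = 400
X, U = np.zeros((n, T + 1)), rng.standard_normal((m, T))
X[:, 0] = rng.standard_normal(n)
for k in range(T):
    X[:, k + 1] = A @ X[:, k] + B @ U[:, k]

# --- Step 2: projection matrix capturing the dominant excited subspace ---
r = 8                                              # assumed reduced dimension
P, _, _ = np.linalg.svd(X[:, 1:], full_matrices=False)
P = P[:, :r]                                       # n x r projection matrix
Z = P.T @ X                                        # compressed states z = P^T x

# --- Step 3: controller on the reduced state (Riccati stand-in for RL) ---
Phi = np.vstack([Z[:, :-1], U])                    # (r+m) x T regressor
Theta = Z[:, 1:] @ np.linalg.pinv(Phi)             # fit z+ ~ Ar z + Br u
Ar, Br = Theta[:, :r], Theta[:, r:]

Qr, R = P.T @ np.eye(n) @ P, np.eye(m)             # assumed LQR weights, projected
S = solve_discrete_are(Ar, Br, Qr, R)
Kr = np.linalg.solve(R + Br.T @ S @ Br, Br.T @ S @ Ar)

# Online, the full-state feedback would be applied as u = -Kr @ (P.T @ x).
print("reduced gain shape:", Kr.shape)
```

Because the controller only sees the r-dimensional compressed state, the learning problem scales with r rather than with the full network size n, which is the source of the speed-up claimed in the abstract.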
