Fast Online Reinforcement Learning Control using State-Space Dimensionality Reduction (1912.06514v2)
Abstract: In this paper, we propose a fast reinforcement learning (RL) control algorithm that enables online control of large-scale networked dynamic systems. RL is an effective way of designing model-free linear quadratic regulator (LQR) controllers for linear time-invariant (LTI) networks with unknown state-space models. However, when the network size is large, conventional RL can result in unacceptably long learning times. The proposed approach is to construct a compressed state vector by projecting the measured state through a projection matrix. This matrix is constructed from online measurements of the states so that it captures the dominant controllable subspace of the open-loop network model. An RL controller is then learned using the reduced-dimensional state in place of the original state, yielding a closed-loop cost close to the optimal LQR cost. Both the numerical benefits and the cyber-physical implementation benefits of the approach are demonstrated using illustrative examples, including wide-area control of the IEEE 68-bus benchmark power system.
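The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the projection matrix is built POD-style from the dominant left singular vectors of a snapshot matrix of measured states (one common way to estimate a dominant controllable subspace from data), and the reduced-order LQR gain is computed model-based for brevity, whereas the paper learns it model-free via RL. The system matrices, dimensions, and reduced order `r` are all made up for the example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)

# Hypothetical stable LTI network: x_dot = A x + B u (n states, m inputs).
n, m, r = 20, 2, 4                 # full order, inputs, chosen reduced order
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# Collect open-loop state snapshots under random excitation (forward Euler).
dt, T = 0.01, 2000
X = np.zeros((n, T))
x = rng.standard_normal(n)
for k in range(T):
    u = rng.standard_normal(m)
    x = x + dt * (A @ x + B @ u)
    X[:, k] = x

# POD-style projection: the dominant left singular vectors of the snapshot
# matrix approximate the dominant (controllable) subspace excited by inputs.
U, s, _ = np.linalg.svd(X, full_matrices=False)
P = U[:, :r]                       # projection matrix; reduced state x_r = P.T @ x

# Reduced-order LQR gain (model-based here purely for illustration; the
# paper instead learns this gain model-free with RL on the reduced state).
Ar, Br = P.T @ A @ P, P.T @ B
Qr, R = np.eye(r), np.eye(m)
S = solve_continuous_are(Ar, Br, Qr, R)
Kr = np.linalg.solve(R, Br.T @ S)  # control law: u = -Kr @ (P.T @ x)

print(P.shape, Kr.shape)
```

The learning problem now lives in the `r`-dimensional projected space rather than the `n`-dimensional state space, which is the source of the claimed speedup when `r` is much smaller than `n`.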