Data-Driven LQR using Reinforcement Learning and Quadratic Neural Networks

arXiv:2311.10235
Published Nov 16, 2023 in eess.SY and cs.SY

Abstract

This paper introduces a novel data-driven approach to design a linear quadratic regulator (LQR) using a reinforcement learning (RL) algorithm that does not require a system model. The key contribution is to perform policy iteration (PI) by designing the policy evaluator as a two-layer quadratic neural network (QNN). This network is trained through convex optimization. To the best of our knowledge, this is the first time that a QNN trained through convex optimization is employed as the Q-function approximator (QFA). The main advantage is that the QNN's input-output mapping has an analytical expression as a quadratic form, which can then be used to obtain an analytical expression for policy improvement. This is in stark contrast to the available techniques in the literature that must train a second neural network to obtain policy improvement. The article establishes the convergence of the learning algorithm to the optimal control, provided the system is controllable and one starts from a stabilizing policy. A quadrotor example demonstrates the effectiveness of the proposed approach.
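
To make the policy-iteration loop concrete, below is a minimal sketch of model-free LQR policy iteration with a quadratic Q-function fitted by convex least squares. It follows the classical Bradtke-style Q-learning scheme rather than the paper's two-layer QNN, and all system matrices, gains, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative assumptions, not values from the paper (whose example is a
# quadrotor). The learner only sees (x, u, cost, x_next) transitions; the
# matrices (A, B) are used solely to simulate data.
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
Qc, Rc = np.eye(2), np.eye(1)   # stage cost x'Qc x + u'Rc u
n, m = 2, 1

def features(z):
    # Upper-triangular features of z z^T, so that features(z) @ h == z' H z
    # for the stacked upper-triangular entries h of a symmetric H. Fitting h
    # is then an ordinary (convex) least-squares problem.
    i, j = np.triu_indices(len(z))
    scale = np.where(i == j, 1.0, 2.0)   # off-diagonal terms appear twice
    return np.outer(z, z)[i, j] * scale

def evaluate_policy(K, episodes=40, horizon=60, rng=np.random.default_rng(0)):
    # Policy evaluation: fit Q_K(x, u) = [x; u]' H [x; u] from data via the
    # Bellman equation Q_K(x, u) = cost(x, u) + Q_K(x', -K x'), which holds
    # exactly for deterministic dynamics and any exploratory input u.
    Phi, y = [], []
    for _ in range(episodes):
        x = rng.normal(size=n)
        for _ in range(horizon):
            u = -K @ x + 0.1 * rng.normal(size=m)   # probing noise for excitation
            xn = A @ x + B @ u                       # "measured" next state
            z = np.concatenate([x, u])
            zn = np.concatenate([xn, -K @ xn])
            Phi.append(features(z) - features(zn))
            y.append(x @ Qc @ x + u @ Rc @ u)
            x = xn
    h, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(y), rcond=None)
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = h
    return H + H.T - np.diag(np.diag(H))             # symmetric Q-matrix

K = np.zeros((m, n))   # stabilizing initial policy (A above is Schur stable)
for _ in range(6):
    H = evaluate_policy(K)
    Huu, Hux = H[n:, n:], H[n:, :n]
    K = np.linalg.solve(Huu, Hux)   # closed-form improvement: u = -K x
print("learned LQR gain:\n", K)
```

Because the fitted Q-function is an exact quadratic form, the greedy policy has the closed-form expression K = H_uu^{-1} H_ux; this is the analytical policy-improvement step the abstract contrasts with approaches that must train a second network.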
