
Abstract

To obviate the requirement of drift dynamics in adaptive dynamic programming (ADP), integral reinforcement learning (IRL) has been proposed as an alternative formulation of the Bellman equation. However, the control coupling dynamics are still needed to obtain a closed-form expression for the optimal control effort. In addition, an initial stabilizing controller and two sets of neural networks (NNs), known as the actor-critic architecture, are required to implement the IRL scheme. In this paper, a stabilizing term in the critic update law is leveraged to remove the need for an initial stabilizing controller in the IRL framework when solving the optimal tracking problem with actuator constraints. With such a term, only one NN is needed to generate optimal control policies in the IRL framework. This critic network is coupled with an experience replay (ER)-enhanced identifier to obviate the need for the control coupling dynamics in the IRL algorithm. The weights of the identifier and critic NNs are updated simultaneously, and it is shown that the ER-enhanced identifier handles parametric variations better than an identifier without ER enhancement. The most salient feature of the novel update law is its variable learning rate, which scales the pace of learning according to the instantaneous Hamilton-Jacobi-Bellman (HJB) error. The variable learning rate in the critic NN, coupled with the ER technique in the identifier NN, yields a tighter residual set for the state error and the NN weight errors, as shown in the uniform ultimate boundedness (UUB) stability proof. Simulation results validate the presented "identifier-critic" NN on a nonlinear system.
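To make the central mechanism concrete, below is a minimal, hedged Python sketch of a critic update whose learning rate is scaled by the instantaneous HJB (Bellman) error, alongside a toy experience-replay buffer for the identifier. The feature map phi, the base rate alpha_0, the specific normalization of the learning rate, and the surrogate system are all illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

# Hedged sketch of a variable-learning-rate critic update in an IRL-style
# setting. All symbols (phi, alpha_0, the HJB-error form, the replay buffer)
# are assumptions made for illustration only.

rng = np.random.default_rng(0)

n_feat = 6                              # number of critic features
W = rng.standard_normal(n_feat) * 0.1   # single critic NN weight vector
alpha_0 = 1.0                           # base learning rate (assumed)

replay_buffer = []                      # experience replay for the identifier
buffer_size = 50

def phi(x):
    """Illustrative polynomial feature vector for the critic value function."""
    x1, x2 = x
    return np.array([x1**2, x1 * x2, x2**2, x1**2 * x2**2, x1**4, x2**4])

def hjb_error(W, x, x_next, cost, dt):
    """IRL-style Bellman residual over one integration interval:
    accumulated running cost plus the change in the critic value."""
    return cost * dt + W @ (phi(x_next) - phi(x))

def critic_step(W, x, x_next, cost, dt):
    """Gradient step on the squared HJB residual with a variable learning
    rate: the step shrinks as the instantaneous HJB error grows (one
    plausible normalization, assumed here)."""
    e = hjb_error(W, x, x_next, cost, dt)
    grad = e * (phi(x_next) - phi(x))
    alpha = alpha_0 / (1.0 + e**2)      # variable learning rate
    return W - alpha * grad

# Toy usage on one transition of a stable surrogate system.
dt = 0.01
x = np.array([1.0, -0.5])
x_next = x + dt * (-x)                  # placeholder identifier prediction
cost = x @ x                            # running state cost (control term omitted)
replay_buffer.append((x, x_next, cost))
replay_buffer = replay_buffer[-buffer_size:]
W = critic_step(W, x, x_next, cost, dt)
```

In this sketch the stored transitions would be replayed when updating the identifier NN, which is what lets it track parametric variations better than an identifier trained only on the current sample; the critic, identifier, and their simultaneous update laws in the paper are more elaborate than shown here.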
