Abstract

Integral reinforcement learning (IRL) was proposed in the literature to obviate the need for drift dynamics in the adaptive dynamic programming framework. Most online IRL schemes require two sets of neural networks (NNs), known as actor-critic NNs, as well as an initial stabilizing controller. Recently, for RL-based robust tracking, the requirements of an initial stabilizing controller and a dual-approximator structure were obviated by a modified gradient-descent-based update law containing a stabilizing term within a critic-only structure. To the best of the authors' knowledge, no study has leveraged such a stabilizing term in the IRL framework to solve optimal trajectory tracking problems for continuous-time nonlinear systems with actuator constraints. To this end, this paper presents a novel update law that combines the stabilizing term with a variable-gain gradient descent in the IRL framework. With these modifications, the IRL tracking controller can be implemented using only a critic NN, and no initial stabilizing controller is required. Another salient feature of the presented update law is its variable learning rate, which scales the pace of learning based on the instantaneous Hamilton-Jacobi-Bellman (HJB) error and the rate of variation of a Lyapunov function along the system trajectories. Under the presented update law, the augmented system states and NN weight errors are shown to be uniformly ultimately bounded (UUB) and to converge to a tighter residual set. The update law is validated for attitude control on a full 6-DoF nonlinear UAV model.
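To make the idea of a critic-only update with a variable learning rate and a stabilizing term concrete, here is a minimal sketch for a critic of the form V(x) ≈ Wᵀφ(x). The specific gain schedule, the normalization, and all function and variable names (`critic_update`, `f_aug`, `k_stab`, etc.) are illustrative assumptions for exposition, not the paper's exact law:

```python
import numpy as np

# Illustrative sketch (NOT the paper's exact update law): a critic-only
# gradient step whose learning rate grows with the instantaneous HJB error,
# plus a stabilizing correction that activates when a Lyapunov rate dV/dt
# indicates the trajectory is destabilizing.

def critic_update(W, dphi_dx, f_aug, e_hjb, dV_dt, alpha0=1.0, k_stab=0.5):
    """One gradient step on the critic weights W.

    dphi_dx : Jacobian of the basis phi w.r.t. the augmented state, shape (n, m)
    f_aug   : augmented-system dynamics evaluated along the trajectory, shape (m,)
    e_hjb   : instantaneous HJB/Bellman error (scalar)
    dV_dt   : rate of change of a Lyapunov function along the trajectory (scalar)
    """
    sigma = dphi_dx @ f_aug                  # regressor driving the Bellman error
    norm = (1.0 + sigma @ sigma) ** 2        # normalization, common in such laws
    # Variable gain: learn faster when the HJB error is large (saturated so
    # the effective rate stays bounded).
    gain = alpha0 * abs(e_hjb) / (1.0 + abs(e_hjb))
    grad_step = -gain * sigma * e_hjb / norm
    # Stabilizing term: only active when dV/dt > 0 (Lyapunov rate unfavorable).
    stab_step = -k_stab * max(dV_dt, 0.0) * sigma / norm
    return W + grad_step + stab_step
```

With zero HJB error and a decreasing Lyapunov function, the weights are left unchanged; a nonzero error produces a normalized step scaled by the saturated gain. In the paper's actual scheme the analogous terms are derived from the Lyapunov analysis that yields the UUB guarantee.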
