
Critic-Only Integral Reinforcement Learning Driven by Variable Gain Gradient Descent for Optimal Tracking Control (1911.04153v4)

Published 11 Nov 2019 in eess.SY and cs.SY

Abstract: Integral reinforcement learning (IRL) was proposed in the literature to obviate the requirement of drift dynamics in the adaptive dynamic programming framework. Most online IRL schemes in the literature require two sets of neural networks (NNs), known as actor-critic NNs, together with an initial stabilizing controller. Recently, for RL-based robust tracking, the requirements of an initial stabilizing controller and a dual-approximator structure were obviated by using a modified gradient-descent-based update law containing a stabilizing term with a critic-only structure. To the best of the authors' knowledge, there has been no study on leveraging such a stabilizing term in the IRL framework to solve optimal trajectory tracking problems for continuous-time nonlinear systems with actuator constraints. To this end, a novel update law leveraging the stabilizing term along with variable gain gradient descent in the IRL framework is presented in this paper. With these modifications, the IRL tracking controller can be implemented using only a critic NN, and no initial stabilizing controller is required. Another salient feature of the presented update law is its variable learning rate, which scales the pace of learning based on the instantaneous Hamilton-Jacobi-Bellman (HJB) error and the rate of variation of a Lyapunov function along the system trajectories. The augmented system states and NN weight errors are shown to possess uniform ultimate boundedness (UUB) under the presented update law and to achieve a tighter residual set. The update law is validated on a full 6-DoF nonlinear UAV model for attitude control.
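The core idea of the variable learning rate can be illustrated with a minimal sketch: a critic-weight step whose gain is modulated by the instantaneous HJB residual and a Lyapunov-rate term. The specific gain formula, function names, and normalization below are illustrative assumptions for exposition, not the paper's exact update law (which also includes a stabilizing term).

```python
import numpy as np

def critic_update(W, phi_grad, delta_hjb, lyap_rate, alpha0=1.0, beta=0.5):
    """One illustrative variable-gain critic-weight update.

    W         : current critic NN weight vector
    phi_grad  : gradient of the HJB error w.r.t. W (regressor vector)
    delta_hjb : instantaneous HJB residual (scalar)
    lyap_rate : estimated rate of change of a Lyapunov function (scalar)

    Hypothetical gain schedule (an assumption, not the paper's law):
    learn faster when the HJB residual is large, slower when the
    Lyapunov function is already decreasing along the trajectory.
    """
    gain = alpha0 * abs(delta_hjb) / (1.0 + beta * max(lyap_rate, 0.0))
    # Normalized gradient-descent step on the squared HJB error,
    # a common normalization in adaptive critic designs.
    denom = (1.0 + phi_grad @ phi_grad) ** 2
    return W - gain * (phi_grad * delta_hjb) / denom
```

With a unit residual and a non-increasing Lyapunov function, the step reduces to a plain normalized gradient step; as `lyap_rate` grows positive, the gain shrinks, slowing adaptation when stability is at risk.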

Citations (3)

