
Off-Policy Risk-Sensitive Reinforcement Learning Based Optimal Tracking Control with Prescribed Performances (2009.00476v1)

Published 31 Aug 2020 in eess.SY and cs.SY

Abstract: An off-policy reinforcement-learning-based control strategy is developed for the optimal tracking control problem so that prescribed performance of the full states is achieved during the learning process. The optimal tracking control problem is converted into an optimal regulation problem through an auxiliary system. The prescribed-performance requirements are transformed into constraint-satisfaction problems, which are handled by risk-sensitive state-penalty terms within an optimization framework. To obtain approximate solutions of the Hamilton-Jacobi-Bellman equation, an off-policy adaptive critic learning architecture is developed that uses current data together with experience data. By exploiting experience data, the proposed weight-estimation update law of the critic learning agent guarantees convergence of the weights to their actual values. This makes the method more practical than common approaches, which must inject external excitation signals to satisfy the persistence-of-excitation condition required for weight convergence. Proofs of stability and weight convergence of the closed-loop system are provided. Simulation results confirm the validity of the proposed off-policy risk-sensitive reinforcement-learning-based control strategy.
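The key algorithmic ingredient described in the abstract is a critic weight-update law that reuses stored experience data alongside the current measurement, so that weight convergence does not depend on an externally injected persistence-of-excitation signal. The sketch below illustrates this idea in the style of concurrent-learning / experience-replay adaptive critics; it is not the paper's exact update law, and the function name, feature map, targets, learning rate, and replay buffer are illustrative assumptions.

```python
import numpy as np

def critic_update(W, phi_now, target_now, replay_buffer, lr=0.05):
    """One gradient step on a squared Bellman-like residual.

    Illustrative sketch only: mixes the current sample (phi_now, target_now)
    with previously recorded experience pairs so the regressor data stay
    informative without adding an external excitation signal.
    """
    samples = [(phi_now, target_now)] + list(replay_buffer)
    grad = np.zeros_like(W)
    for phi, target in samples:
        e = W @ phi - target                  # residual for this sample
        grad += e * phi / (1.0 + phi @ phi)   # normalized gradient contribution
    return W - lr * grad / len(samples)

# Hypothetical usage with a feature map phi(.) and a one-step target:
#   W = critic_update(W, phi(x_k), r_k + gamma * (W @ phi(x_k1)), buffer)
```

Averaging the gradient over replayed samples is what stands in for persistent excitation here: as long as the stored regressors span the feature space, the combined update keeps driving the weight error toward zero even when the current trajectory is not exciting.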

