Online Adaptive Optimal Control Algorithm Based on Synchronous Integral Reinforcement Learning With Explorations (2105.09006v1)

Published 19 May 2021 in eess.SY and cs.SY

Abstract: In this paper, we present a novel algorithm named synchronous integral Q-learning, which is based on synchronous policy iteration, to solve the continuous-time infinite horizon optimal control problems of input-affine system dynamics. The integral reinforcement is measured as an excitation signal in this method to estimate the solution to the Hamilton-Jacobi-Bellman equation. Moreover, the proposed method is completely model-free, i.e., no a priori knowledge of the system is required. Using policy iteration, the actor and critic neural networks can simultaneously approximate the optimal value function and policy. The persistence of excitation condition is required to guarantee the convergence of the two networks. Unlike in traditional policy iteration algorithms, the restriction of the initial admissible policy is relaxed in this method. The effectiveness of the proposed algorithm is verified through numerical simulations.
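
For context, the integral reinforcement learning setup the abstract refers to can be sketched as follows; the notation below is a standard formulation of continuous-time IRL and is an assumption for illustration, not taken from the paper itself.

\dot{x} = f(x) + g(x)u, \qquad V(x(t)) = \int_t^{\infty} \left( x^\top Q x + u^\top R u \right) d\tau

The integral Bellman equation evaluates the value function over a finite reinforcement interval T, which avoids needing the drift dynamics f(x):

V(x(t)) = \int_t^{t+T} \left( x^\top Q x + u^\top R u \right) d\tau + V(x(t+T))

In model-based IRL, policy improvement still requires the input matrix g(x):

u(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V(x)

The Q-learning variant described in the abstract removes this remaining dependence, making the method completely model-free, with the actor and critic networks tuned simultaneously under a persistence-of-excitation condition.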

Citations (11)
