Model-based reinforcement learning for infinite-horizon approximate optimal tracking (1506.00685v1)

Published 1 Jun 2015 in cs.SY and math.OC

Abstract: This paper provides an approximate online adaptive solution to the infinite-horizon optimal tracking problem for control-affine continuous-time nonlinear systems with unknown drift dynamics. Model-based reinforcement learning is used to relax the persistence of excitation condition. Model-based reinforcement learning is implemented using a concurrent learning-based system identifier to simulate experience by evaluating the Bellman error over unexplored areas of the state space. Tracking of the desired trajectory and convergence of the developed policy to a neighborhood of the optimal policy are established via Lyapunov-based stability analysis. Simulation results demonstrate the effectiveness of the developed technique.
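The core idea of "simulated experience" in the abstract is that the Bellman (Hamilton–Jacobi–Bellman) error can be evaluated at arbitrary sample points using an identified model, rather than only along the system's actual trajectory. The following is a minimal illustrative sketch of that evaluation for a scalar control-affine system, not the paper's exact tracking formulation; the dynamics, basis functions, and weights below are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch (hypothetical example, not the paper's method):
# evaluate the Bellman error of an approximate value function at sampled
# off-trajectory states for a scalar control-affine system
#   x_dot = f(x) + g(x) u.

def f(x):            # drift dynamics (in the paper, supplied by an identifier)
    return -x + 0.5 * x**2

def g(x):            # control effectiveness
    return 1.0

Q, R = 1.0, 1.0      # quadratic cost weights: r(x, u) = Q x^2 + R u^2

def phi(x):          # polynomial basis for the critic: V_hat(x) = W' phi(x)
    return np.array([x**2, x**4])

def dphi(x):         # gradient of the basis
    return np.array([2 * x, 4 * x**3])

def policy(x, W):    # approximate optimal policy u = -(1/2R) g(x) dV_hat/dx
    return -0.5 / R * g(x) * (dphi(x) @ W)

def bellman_error(x, W):
    """HJB residual at state x for critic weight estimate W."""
    u = policy(x, W)
    x_dot = f(x) + g(x) * u
    return (dphi(x) @ W) * x_dot + Q * x**2 + R * u**2

# "Simulated experience": evaluate the Bellman error on a grid of sample
# points covering unexplored regions, instead of only along the trajectory.
W = np.array([0.5, 0.0])                  # current critic weight estimate
samples = np.linspace(-2.0, 2.0, 9)
errors = np.array([bellman_error(x, W) for x in samples])
print(errors.shape)                       # one residual per sampled state
```

In a concurrent-learning scheme, residuals like these would drive the critic weight update at every sample point simultaneously, which is what relaxes the classical persistence-of-excitation requirement on the actual trajectory.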

Authors (4)
  1. Rushikesh Kamalapurkar (54 papers)
  2. Lindsey Andrews (2 papers)
  3. Patrick Walters (7 papers)
  4. Warren E. Dixon (37 papers)
Citations (113)
