Model-based reinforcement learning for infinite-horizon approximate optimal tracking (1506.00685v1)
Published 1 Jun 2015 in cs.SY and math.OC
Abstract: This paper provides an approximate online adaptive solution to the infinite-horizon optimal tracking problem for control-affine continuous-time nonlinear systems with unknown drift dynamics. Model-based reinforcement learning is used to relax the persistence of excitation condition. Model-based reinforcement learning is implemented using a concurrent learning-based system identifier to simulate experience by evaluating the Bellman error over unexplored areas of the state space. Tracking of the desired trajectory and convergence of the developed policy to a neighborhood of the optimal policy are established via Lyapunov-based stability analysis. Simulation results demonstrate the effectiveness of the developed technique.
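For context, the sketch below shows a generic form of the model-based Bellman error used in this class of approximate dynamic programming methods; the symbols (critic weights $\hat W_c$, basis features $\sigma$, identified drift $\hat f$, control effectiveness $g$, local cost $r$, concatenated tracking state $\zeta$) are illustrative assumptions and not necessarily the paper's exact notation.

```latex
% Illustrative model-based Bellman error (assumed notation, not the paper's exact symbols).
% \hat V(\zeta) = \hat W_c^\top \sigma(\zeta) is the critic's value-function approximation,
% \hat f is the drift estimated by the system identifier, g the known control effectiveness,
% and \hat u the current approximate policy.
\[
  \delta(\zeta) \;=\;
  \hat W_c^\top \nabla_{\zeta}\sigma(\zeta)
  \bigl( \hat f(\zeta) + g(\zeta)\,\hat u(\zeta) \bigr)
  \;+\; r\bigl(\zeta, \hat u(\zeta)\bigr)
\]
% Because \hat f is available, \delta can be evaluated at arbitrary sample points \zeta_i off the
% measured trajectory ("simulated experience"), which is what relaxes the persistence of
% excitation requirement on the actual system trajectory.
```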
- Rushikesh Kamalapurkar
- Lindsey Andrews
- Patrick Walters
- Warren E. Dixon