Abstract

This paper presents a new formulation for model-free robust optimal regulation of continuous-time nonlinear systems. The proposed reinforcement-learning-based approach, referred to as incremental adaptive dynamic programming (IADP), exploits measured data to design an approximate optimal incremental control strategy that stabilizes the controlled system incrementally under model uncertainties, environmental disturbances, and input saturation. By leveraging the time delay estimation (TDE) technique, we first exploit sensory data to relax the requirement of complete knowledge of the system dynamics: measured data are used to construct an incremental model that captures the system evolution in incremental form. The resulting incremental dynamics then serve to design the approximate optimal incremental control strategy via adaptive dynamic programming, implemented as a simplified single-critic structure that approximates the value function solving the Hamilton-Jacobi-Bellman (HJB) equation. Furthermore, experience data are used to design an off-policy weight update law for the critic artificial neural network with guaranteed weight convergence. Importantly, to address the TDE error that this approximation inevitably introduces, we incorporate a term related to the TDE error bound into the cost function, whereby the TDE error is attenuated during the optimization process. Proofs of system stability and weight convergence are provided. Numerical simulations validate the effectiveness and advantages of the proposed IADP, particularly its reduced control energy expenditure and enhanced robustness.
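To make the incremental mechanism concrete, below is a minimal sketch of how TDE can recover the unknown lumped dynamics from delayed measurements and feed an incremental control law. This is not the authors' implementation: the toy scalar plant, gains, delay, and all names are our own illustrative assumptions, and the auxiliary input `v` stands in for the approximate optimal term that the single-critic ADP scheme would supply.

```python
# Hypothetical sketch of TDE-based incremental control (scalar plant,
# Euler integration, hand-picked gains). Not the paper's implementation.
import numpy as np

dt   = 0.001           # sampling period, also used as the TDE delay L
gbar = 1.0             # constant control-effectiveness estimate (assumed known)

def f_true(x):         # "unknown" drift, used only by the simulator
    return -x + 0.5 * np.sin(x)

x, x_prev, u_prev = 1.0, 1.0, 0.0
for k in range(5000):
    t = k * dt
    d = 0.1 * np.sin(2.0 * t)              # external disturbance
    # --- time delay estimation (TDE) ---------------------------------
    # Lumped unknown dynamics H = f(x) + d, estimated from one-step-old
    # measurements: H_hat = xdot(t-L) - gbar * u(t-L).
    xdot_prev = (x - x_prev) / dt          # delayed derivative from measured data
    H_hat = xdot_prev - gbar * u_prev
    # --- incremental control ------------------------------------------
    v = -2.0 * x                           # stand-in for the ADP-optimal term
    u = (v - H_hat) / gbar                 # cancel estimated dynamics, inject v
    u = np.clip(u, -2.0, 2.0)              # input saturation
    # --- plant step (simulator only; the controller never sees f_true) --
    xdot = f_true(x) + gbar * u + d
    x_prev, u_prev = x, u
    x = x + dt * xdot

print(f"final state: {x:.4f}")             # settles near the origin
```

Under these assumptions the closed loop behaves as xdot ≈ v + ε, where ε = H − H_hat is the TDE error; in the paper's formulation, the term added to the cost function is what attenuates this residual ε during optimization.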
