Model Free Deep Deterministic Policy Gradient Controller for Setpoint Tracking of Non-minimum Phase Systems (2402.17703v1)

Published 27 Feb 2024 in eess.SY and cs.SY

Abstract: Deep Reinforcement Learning (DRL) techniques have received significant attention in control and decision-making algorithms. Most applications involve complex decision-making systems, where the algorithms' computational power and cost are justified. While model-based variants are emerging, model-free DRL approaches are appealing for their independence from explicit models, yet their performance remains relatively under-explored, particularly in applied control. This study conducts a thorough performance analysis comparing the data-driven DRL paradigm with a classical state-feedback controller, both designed from the same cost (reward) function of the linear quadratic regulator (LQR) problem. Twelve additional performance criteria are introduced to assess the controllers independently of the LQR problem for which they were designed. Two Deep Deterministic Policy Gradient (DDPG)-based controllers are developed, motivated by DDPG's widespread adoption, and applied to a challenging setpoint tracking problem in a Non-Minimum Phase (NMP) system. The performance and robustness of the controllers are assessed in the presence of operational challenges, including disturbances, noise, varying initial conditions, and model uncertainties. The findings suggest that the DDPG controller behaves promisingly under rigorous test conditions; nevertheless, further improvements are needed for it to outperform classical methods on all criteria. While DRL algorithms may excel in complex environments owing to the flexibility of the reward-function definition, this paper offers practical insights and a comparison framework specifically designed to evaluate these algorithms in the context of control engineering.
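
The comparison hinges on designing both controllers against the same LQR objective. For reference, a standard discrete-time infinite-horizon LQR cost takes the following textbook form (the paper's specific weighting matrices are not stated in the abstract, so Q and R here are generic placeholders):

J = \sum_{k=0}^{\infty} \left( x_k^\top Q\, x_k + u_k^\top R\, u_k \right), \qquad Q \succeq 0,\ R \succ 0

where x_k is the state and u_k the control input; the corresponding DRL reward is the negative stage cost, r_k = -(x_k^\top Q x_k + u_k^\top R u_k).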
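
To make the DDPG side of the comparison concrete, the sketch below shows the core actor-critic update with an LQR-style reward on a toy linear plant. This is a minimal illustration under stated assumptions, not the paper's implementation: the plant matrices, network sizes, and hyperparameters are hypothetical placeholders, the sketch regulates the state to the origin rather than tracking a nonzero setpoint, and the toy plant is not specifically non-minimum phase.

# Minimal DDPG sketch (illustrative only; not the paper's implementation).
# Assumptions: discrete-time linear plant x_{k+1} = A x_k + B u_k and an
# LQR-style reward r = -(x'Qx + u'Ru). All matrices and hyperparameters
# below are hypothetical placeholders.
import numpy as np
import torch
import torch.nn as nn

# Hypothetical 2-state, 1-input plant; A, B, Q, R are stand-ins.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = 0.01 * np.eye(1)

def make_net(n_in, n_out, out_act=None):
    layers = [nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = make_net(2, 1, nn.Tanh())    # deterministic policy mu(x)
critic = make_net(3, 1)              # action-value function Q(x, u)
actor_t = make_net(2, 1, nn.Tanh())  # target networks for stable bootstrapping
critic_t = make_net(3, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

gamma, tau = 0.99, 0.005
buffer = []

def reward(x, u):
    return -(x @ Q @ x + u @ R @ u)  # negative LQR stage cost

x = np.zeros(2)
for step in range(10000):
    with torch.no_grad():
        u = actor(torch.tensor(x, dtype=torch.float32)).numpy()
    u = u + 0.1 * np.random.randn(1)                 # exploration noise
    x_next = A @ x + (B @ u.reshape(1, 1)).ravel()   # model used only to simulate
    buffer.append((x, u, reward(x, u), x_next))
    x = x_next
    if len(buffer) < 256:
        continue
    # Sample a minibatch from the replay buffer.
    batch = [buffer[i] for i in np.random.randint(len(buffer), size=64)]
    xs, us, rs, xns = map(np.array, zip(*batch))
    xs = torch.tensor(xs, dtype=torch.float32)
    us = torch.tensor(us, dtype=torch.float32)
    rs = torch.tensor(rs, dtype=torch.float32).unsqueeze(1)
    xns = torch.tensor(xns, dtype=torch.float32)
    # Critic update: regress toward the bootstrapped target value.
    with torch.no_grad():
        y = rs + gamma * critic_t(torch.cat([xns, actor_t(xns)], dim=1))
    loss_c = ((critic(torch.cat([xs, us], dim=1)) - y) ** 2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # Actor update: ascend the critic's value of the policy's own actions.
    loss_a = -critic(torch.cat([xs, actor(xs)], dim=1)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    # Polyak-average the target networks.
    for p, pt in zip(actor.parameters(), actor_t.parameters()):
        pt.data.mul_(1 - tau).add_(tau * p.data)
    for p, pt in zip(critic.parameters(), critic_t.parameters()):
        pt.data.mul_(1 - tau).add_(tau * p.data)

Even in this stripped-down form, the sketch makes visible why the approach is "model free": A and B appear only inside the simulated environment step, never in the learning updates, whereas a classical LQR state-feedback gain is computed directly from A, B, Q, and R.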
