Inverse Optimal Control with Discount Factor for Continuous and Discrete-Time Control-Affine Systems and Reinforcement Learning (2211.09917v1)

Published 17 Nov 2022 in math.OC, cs.SY, and eess.SY

Abstract: This paper addresses the inverse optimal control problem of finding the state weighting function that leads to a quadratic value function when the cost on the input is fixed to be quadratic. The paper focuses on a class of infinite horizon discrete-time and continuous-time optimal control problems whose dynamics are control-affine and whose cost is quadratic in the input. The optimal control policy for this problem is the projection of minus the gradient of the value function onto the space formed by all feasible control directions. This projection points along the control direction of steepest decrease of the value function. For discrete-time systems and a quadratic value function the optimal control law can be obtained as the solution of a regularized least squares program, which corresponds to a receding horizon control with a single step ahead. For the single input case and a quadratic value function the solution for small weights in the control energy is interpreted as a control policy that at each step brings the trajectories of the system as close as possible to the origin, as measured by an appropriate norm. Conditions under which the optimal control law is linear are also stated. Additionally, the paper offers a mapping of the optimal control formulation to an equivalent reinforcement learning formulation. Examples show the application of the theoretical results.
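The abstract's observation that, for discrete-time systems with a quadratic value function, the optimal control law reduces to a regularized least squares program can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes a control-affine system x_{k+1} = f(x) + g(x)u, a quadratic value function V(x) = xᵀPx, and a quadratic input cost uᵀRu, with all names (`one_step_policy`, `f`, `g`, `P`, `R`) chosen here for illustration.

```python
import numpy as np

def one_step_policy(x, f, g, P, R):
    """One-step-ahead (receding horizon, single step) control law.

    Minimizes V(f(x) + g(x) u) + u^T R u over u. With V quadratic,
        min_u (f + G u)^T P (f + G u) + u^T R u
    is a regularized least squares program with closed-form solution
        u* = -(G^T P G + R)^{-1} G^T P f.
    """
    fx = f(x)   # drift term f(x)
    G = g(x)    # input direction matrix g(x)
    # Solve the normal equations of the regularized least squares program.
    return -np.linalg.solve(G.T @ P @ G + R, G.T @ P @ fx)

# Illustrative scalar linear system x_{k+1} = a x + b u,
# a special case of a control-affine system.
a, b = 1.2, 1.0
P = np.array([[2.0]])   # quadratic value function weight
R = np.array([[0.1]])   # quadratic input cost weight
x = np.array([1.0])
u = one_step_policy(x, lambda x: a * x, lambda x: np.array([[b]]), P, R)
```

As the abstract notes for the single-input case with small input weight R, this policy drives the successor state f(x) + g(x)u as close to the origin as possible in the norm induced by P, with R acting as the regularizer.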

Citations (4)

Authors (1)