Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm (2402.02042v3)

Published 3 Feb 2024 in cs.LG and cs.AI

Abstract: This paper explores the realm of infinite horizon average reward Constrained Markov Decision Processes (CMDPs). To the best of our knowledge, this work is the first to delve into the regret and constraint violation analysis of average reward CMDPs with general policy parametrization. To address this challenge, we propose a primal-dual policy gradient algorithm that adeptly manages the constraints while ensuring a low regret guarantee toward achieving a globally optimal policy. In particular, our proposed algorithm achieves $\tilde{\mathcal{O}}(T^{4/5})$ objective regret and $\tilde{\mathcal{O}}(T^{4/5})$ constraint violation bounds.
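To illustrate the primal-dual idea behind the abstract (gradient ascent on the Lagrangian in the policy parameters, projected gradient descent in the dual variable), here is a minimal sketch on a toy CMDP. The environment, step sizes, and the one-step gradient approximation are all illustrative assumptions, not the paper's actual algorithm or analysis:

```python
import numpy as np

# Hypothetical toy CMDP: 2 states, 2 actions (dynamics, rewards, and costs
# are invented for illustration only).
rng = np.random.default_rng(0)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # P[s][a] = next-state distribution
              [[0.3, 0.7], [0.6, 0.4]]])
r = np.array([[1.0, 0.0], [0.5, 0.2]])     # reward r[s][a], in [0, 1]
c = np.array([[0.8, 0.1], [0.4, 0.0]])     # cost   c[s][a], in [0, 1]
b = 0.4                                     # constraint: average cost <= b

theta = np.zeros((2, 2))                    # softmax policy parameters
lam = 0.0                                   # dual variable (Lagrange multiplier)
alpha, eta = 0.05, 0.05                     # primal / dual step sizes (assumed)

def rollout(theta, lam, T=2000):
    """Estimate average reward/cost and a crude REINFORCE-style
    gradient of the Lagrangian (one-step payoff in place of a
    long-run advantage -- a deliberate simplification)."""
    s = 0
    grad = np.zeros_like(theta)
    R = C = 0.0
    for _ in range(T):
        logits = theta[s]
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()
        a = rng.choice(2, p=pi)
        g = -pi                              # softmax score: e_a - pi
        g[a] += 1.0
        grad[s] += g * (r[s, a] - lam * c[s, a])
        R += r[s, a]
        C += c[s, a]
        s = rng.choice(2, p=P[s, a])
    return R / T, C / T, grad / T

for _ in range(200):
    avg_r, avg_c, grad = rollout(theta, lam)
    theta += alpha * grad                            # primal ascent
    lam = max(0.0, lam + eta * (avg_c - b))          # projected dual update
```

The dual variable grows while the estimated average cost exceeds the budget `b`, penalizing costly actions in the primal update, and shrinks (clipped at zero) once the constraint is met.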

Authors (3)
  1. Qinbo Bai (14 papers)
  2. Washim Uddin Mondal (23 papers)
  3. Vaneet Aggarwal (222 papers)
Citations (1)
