
Learning to Constrain Policy Optimization with Virtual Trust Region (2204.09315v2)

Published 20 Apr 2022 in cs.LG

Abstract: We introduce a constrained optimization method for policy gradient reinforcement learning that uses a virtual trust region to regulate each policy update. In addition to the usual trust region defined by proximity to a single old policy, we form a second trust region around a virtual policy that represents a wide range of past policies. We then constrain the new policy to stay close to this virtual policy, which is beneficial when the old policy performs poorly. More importantly, we propose a mechanism that automatically builds the virtual policy from a memory of past policies, providing a new capability for dynamically learning appropriate virtual trust regions during optimization. Our proposed method, dubbed Memory-Constrained Policy Optimization (MCPO), is evaluated in diverse environments, including robotic locomotion control, navigation with sparse rewards, and Atari games, and consistently demonstrates competitive performance against recent on-policy constrained policy gradient methods.
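To make the dual trust-region idea concrete, below is a minimal PyTorch sketch of a policy-gradient loss penalized by two KL terms: one toward the previous policy (the standard trust region) and one toward a "virtual" policy summarizing past policies. This is an illustration under assumptions, not the paper's exact MCPO formulation: the names (`mcpo_style_loss`, `beta_old`, `beta_virtual`), the discrete action space, and the uniform parameter-averaging rule for building the virtual policy are all hypothetical choices for the sketch.

```python
# Hedged sketch of a dual-trust-region policy update. All hyperparameters
# and the virtual-policy construction are illustrative assumptions.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Small categorical policy returning log-probabilities over actions."""

    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
        )

    def forward(self, obs):
        return F.log_softmax(self.body(obs), dim=-1)


def kl(p_log, q_log):
    # KL(p || q) for categorical distributions given log-probabilities.
    return (p_log.exp() * (p_log - q_log)).sum(-1).mean()


def mcpo_style_loss(policy, old_policy, virtual_policy, obs, actions, adv,
                    beta_old=1.0, beta_virtual=1.0):
    """Surrogate policy-gradient loss with two KL regularizers:
    one toward the old policy and one toward the virtual policy."""
    log_probs = policy(obs)
    with torch.no_grad():
        old_log = old_policy(obs)
        virt_log = virtual_policy(obs)
    logp_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    old_logp_a = old_log.gather(1, actions.unsqueeze(1)).squeeze(1)
    ratio = (logp_a - old_logp_a).exp()
    surrogate = -(ratio * adv).mean()
    return (surrogate
            + beta_old * kl(old_log, log_probs)
            + beta_virtual * kl(virt_log, log_probs))


def update_virtual_policy(virtual_policy, policy_memory):
    # One plausible way to build the virtual policy: uniformly average the
    # parameters of the policies stored in memory. The paper instead learns
    # this construction; the average is only a stand-in.
    with torch.no_grad():
        for name, p in virtual_policy.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name].data for m in policy_memory]
            )
            p.copy_(stacked.mean(0))


if __name__ == "__main__":
    obs_dim, n_actions, batch = 4, 3, 8
    policy = PolicyNet(obs_dim, n_actions)
    old_policy = copy.deepcopy(policy)
    virtual = copy.deepcopy(policy)
    memory = [copy.deepcopy(policy) for _ in range(3)]
    update_virtual_policy(virtual, memory)
    obs = torch.randn(batch, obs_dim)
    actions = torch.randint(n_actions, (batch,))
    adv = torch.randn(batch)
    loss = mcpo_style_loss(policy, old_policy, virtual, obs, actions, adv)
    loss.backward()
```

A design note on the sketch: because the virtual policy aggregates many past policies, the second KL term keeps updates anchored even when the most recent policy is a poor reference; how that aggregation is actually learned from memory is the paper's contribution and is not reproduced here.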

Authors (7)
  1. Hung Le (120 papers)
  2. Thommen Karimpanal George (6 papers)
  3. Majid Abdolshah (10 papers)
  4. Dung Nguyen (40 papers)
  5. Kien Do (35 papers)
  6. Sunil Gupta (78 papers)
  7. Svetha Venkatesh (160 papers)
Citations (3)
