BRPO: Batch Residual Policy Optimization (2002.05522v2)

Published 8 Feb 2020 in cs.LG, cs.AI, and stat.ML

Abstract: In batch reinforcement learning (RL), one often constrains a learned policy to be close to the behavior (data-generating) policy, e.g., by constraining the learned action distribution to differ from the behavior policy by some maximum degree that is the same at each state. This can cause batch RL to be overly conservative, unable to exploit large policy changes at frequently-visited, high-confidence states without risking poor performance at sparsely-visited states. To remedy this, we propose residual policies, where the allowable deviation of the learned policy is state-action-dependent. We derive a new RL method, BRPO, which learns both the policy and allowable deviation that jointly maximize a lower bound on policy performance. We show that BRPO achieves state-of-the-art performance in a number of tasks.
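
To make the core idea concrete, below is a minimal sketch of a residual policy with a state-action-dependent deviation, assuming discrete actions and access to the behavior policy's logits. This is not the authors' implementation: the network structure, the sigmoid gating of the residual, and the names (ResidualPolicy, residual_head, deviation_head) are all illustrative assumptions, and the sketch shows only the policy parameterization, not the lower-bound objective that BRPO optimizes.

```python
# Illustrative sketch only, not the BRPO paper's code. Assumes discrete
# actions and a known behavior policy pi_b given as logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualPolicy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Learned correction to the behavior policy's logits.
        self.residual_head = nn.Linear(hidden, n_actions)
        # Learned per-(state, action) allowable deviation (an assumption of
        # this sketch: the paper learns the deviation jointly with the policy).
        self.deviation_head = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor,
                behavior_logits: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        residual = self.residual_head(h)
        # Deviation in (0, 1): where it is near 0 the learned policy collapses
        # to the behavior policy; where it is near 1 the residual can move the
        # policy far from pi_b (e.g., at high-confidence, well-covered states).
        deviation = torch.sigmoid(self.deviation_head(h))
        logits = behavior_logits + deviation * residual
        return F.log_softmax(logits, dim=-1)

# Usage example with a uniform behavior policy (all-zero logits).
policy = ResidualPolicy(obs_dim=4, n_actions=2)
obs = torch.randn(8, 4)
behavior_logits = torch.zeros(8, 2)
log_pi = policy(obs, behavior_logits)  # shape (8, 2), log-probabilities
```

The design point the sketch tries to capture is the one the abstract emphasizes: because the deviation is a function of the state-action pair rather than a single global constant, the policy can deviate aggressively where the batch data supports it and stay close to the behavior policy elsewhere.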

Authors (7)
  1. Sungryull Sohn (21 papers)
  2. Yinlam Chow (46 papers)
  3. Jayden Ooi (6 papers)
  4. Ofir Nachum (64 papers)
  5. Honglak Lee (174 papers)
  6. Ed Chi (24 papers)
  7. Craig Boutilier (78 papers)
Citations (46)
