
AACC: Asymmetric Actor-Critic in Contextual Reinforcement Learning (2208.02376v1)

Published 3 Aug 2022 in cs.LG and stat.ML

Abstract: Reinforcement Learning (RL) techniques have drawn great attention in many challenging tasks, but their performance deteriorates dramatically when applied to real-world problems. Various methods, such as domain randomization, have been proposed to address such situations by training agents under different environmental setups so that they generalize to different environments during deployment. However, these methods usually do not properly incorporate information about the underlying environmental factors the agents interact with, and can therefore be overly conservative when facing changes in the surroundings. In this paper, we first formalize the task of adapting to changing environmental dynamics in RL as a generalization problem using Contextual Markov Decision Processes (CMDPs). We then propose the Asymmetric Actor-Critic in Contextual RL (AACC), an end-to-end actor-critic method for such generalization tasks. We experimentally demonstrate substantial performance improvements of AACC over existing baselines in a range of simulated environments.
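The core idea the abstract describes is an asymmetric information split between the two networks: during training in simulation, the critic can condition on the context (the CMDP's environmental factors, e.g. varying mass or friction), while the actor observes only the regular state, so the learned policy stays deployable where the context is unobservable. Below is a minimal PyTorch sketch of that split; the module names, hidden sizes, deterministic actor, and Q-style critic are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy network: conditioned on the observation only, so it can be
    deployed in environments where the context is unobservable."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class ContextCritic(nn.Module):
    """Q-network: additionally conditioned on the context vector (the
    environmental factors), which is available only during training in
    simulation -- this extra input is the 'asymmetric' part."""
    def __init__(self, obs_dim: int, act_dim: int, ctx_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar Q-value
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor,
                ctx: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act, ctx], dim=-1))

# Only the critic consumes the context; at deployment the critic is
# discarded and the actor runs on raw observations alone.
actor = Actor(obs_dim=11, act_dim=3)
critic = ContextCritic(obs_dim=11, act_dim=3, ctx_dim=4)
```

Because the context enters only through the critic, the asymmetry sharpens value estimates during training without requiring the deployed policy to observe (or infer) the environmental factors.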

Authors (6)
  1. Wangyang Yue (1 paper)
  2. Yuan Zhou (251 papers)
  3. Xiaochuan Zhang (6 papers)
  4. Yuchen Hua (2 papers)
  5. Zhiyuan Wang (102 papers)
  6. Guang Kou (1 paper)
Citations (2)
