
BiERL: A Meta Evolutionary Reinforcement Learning Framework via Bilevel Optimization (2308.01207v1)

Published 1 Aug 2023 in cs.NE, cs.AI, and cs.LG

Abstract: Evolutionary reinforcement learning (ERL) algorithms have recently attracted attention for tackling complex reinforcement learning (RL) problems due to their high parallelism, but they are prone to insufficient exploration or model collapse unless their hyperparameters (aka meta-parameters) are carefully tuned. In this paper, we propose a general meta ERL framework via bilevel optimization (BiERL) that jointly updates hyperparameters in parallel with training the ERL model within a single agent, removing the need for prior domain knowledge or costly optimization procedures before model deployment. We design an elegant meta-level architecture that embeds the inner level's evolving experience into an informative population representation, and we introduce a simple and feasible evaluation of the meta-level fitness function to improve learning efficiency. Extensive experiments on MuJoCo and Box2D tasks verify that, as a general framework, BiERL outperforms various baselines and consistently improves learning performance across a diversity of ERL algorithms.
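The bilevel idea described in the abstract can be illustrated with a minimal sketch: an inner evolution-strategies loop updates policy parameters, while a meta level perturbs a hyperparameter (here the mutation scale `sigma`) and keeps whichever value yields the higher meta-fitness, measured as the best fitness of the resulting generation. All function names, the toy fitness surrogate, and the specific meta-update rule below are illustrative assumptions, not the paper's actual architecture (which embeds population representations at the meta level).

```python
import random

def inner_fitness(theta):
    # Toy surrogate for RL return: negative squared distance to a target
    # policy at all-ones. Stands in for an episode rollout (assumption).
    return -sum((t - 1.0) ** 2 for t in theta)

def es_generation(theta, sigma, pop_size, rng):
    """One evolution-strategies generation: sample Gaussian perturbations
    of theta, evaluate each, and return the best candidate and its score."""
    population = [
        [t + rng.gauss(0.0, sigma) for t in theta] for _ in range(pop_size)
    ]
    scores = [inner_fitness(c) for c in population]
    best = max(range(pop_size), key=lambda i: scores[i])
    return population[best], scores[best]

def bierl_sketch(dim=5, generations=40, pop_size=16, seed=0):
    """Bilevel loop (hypothetical sketch): the inner level evolves theta;
    the meta level perturbs sigma and adopts the perturbed value whenever
    it produces a higher-scoring generation."""
    rng = random.Random(seed)
    theta = [0.0] * dim
    sigma = 0.5  # hyperparameter updated at the meta level
    for _ in range(generations):
        # Meta level: propose a perturbed sigma alongside the current one.
        candidate_sigma = max(1e-3, sigma * (1.0 + rng.gauss(0.0, 0.2)))
        theta_a, best_a = es_generation(theta, sigma, pop_size, rng)
        theta_b, best_b = es_generation(theta, candidate_sigma, pop_size, rng)
        # Keep whichever hyperparameter setting achieved higher meta-fitness.
        if best_b > best_a:
            sigma, theta = candidate_sigma, theta_b
        else:
            theta = theta_a
    return theta, inner_fitness(theta)

theta, score = bierl_sketch()
print(f"final fitness: {score:.3f}, final sigma-adapted policy dim: {len(theta)}")
```

In the paper's framing, the key difference from this sketch is that the meta level learns from an informative representation of the inner population's evolving experience rather than a single greedy comparison, but the structure of "hyperparameter updates running in parallel with inner-level evolution within one agent" is the same.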

Authors (6)
  1. Junyi Wang (19 papers)
  2. Yuanyang Zhu (7 papers)
  3. Zhi Wang (261 papers)
  4. Yan Zheng (102 papers)
  5. Jianye Hao (185 papers)
  6. Chunlin Chen (53 papers)
