
ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-Ensembles (2306.06871v4)

Published 12 Jun 2023 in cs.LG, cs.AI, and cs.RO

Abstract: Offline reinforcement learning (RL) is a learning paradigm where an agent learns from a fixed dataset of experience. However, learning solely from a static dataset can limit performance due to the lack of exploration. To overcome this limitation, offline-to-online RL combines offline pre-training with online fine-tuning, which enables the agent to further refine its policy by interacting with the environment in real time. Despite its benefits, existing offline-to-online RL methods suffer from performance degradation and slow improvement during the online phase. To tackle these challenges, we propose a novel framework called ENsemble-based Offline-To-Online (ENOTO) RL. By increasing the number of Q-networks, we seamlessly bridge offline pre-training and online fine-tuning without degrading performance. Moreover, to expedite online performance enhancement, we appropriately loosen the pessimism of Q-value estimation and incorporate ensemble-based exploration mechanisms into our framework. Experimental results demonstrate that ENOTO can substantially improve the training stability, learning efficiency, and final performance of existing offline RL methods during online fine-tuning on a range of locomotion and navigation tasks, significantly outperforming existing offline-to-online RL methods.
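The abstract's core ingredients, a Q-ensemble, pessimistic value estimation for the offline phase, a loosened estimate for the online phase, and ensemble-based exploration, can be illustrated with a minimal, generic sketch. This is not the authors' implementation; the class names, network sizes, and the specific pessimism/exploration rules (min over members, mean over members, std-based bonus) are illustrative assumptions only.

# Minimal, generic sketch (not the ENOTO code) of ensemble Q-value estimation
# for offline-to-online RL. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class QEnsemble(nn.Module):
    """N independent Q-networks sharing the same architecture."""
    def __init__(self, obs_dim, act_dim, n_members=10, hidden=256):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(n_members)
        )

    def forward(self, obs, act):
        x = torch.cat([obs, act], dim=-1)
        # Shape: (n_members, batch, 1)
        return torch.stack([q(x) for q in self.members], dim=0)

def pessimistic_target(q_values):
    # Offline phase: minimum over ensemble members, a conservative
    # estimate that guards against overvaluing out-of-distribution actions.
    return q_values.min(dim=0).values

def loosened_target(q_values):
    # Online phase: relax pessimism by averaging over members, which can
    # speed up value improvement once real interaction data arrives.
    return q_values.mean(dim=0)

def exploration_bonus(q_values, beta=1.0):
    # Ensemble disagreement (std across members) as an optimism bonus
    # for scoring candidate actions during online fine-tuning.
    return q_values.mean(dim=0) + beta * q_values.std(dim=0)

if __name__ == "__main__":
    ens = QEnsemble(obs_dim=17, act_dim=6)
    obs, act = torch.randn(32, 17), torch.randn(32, 6)
    qs = ens(obs, act)                    # (10, 32, 1)
    print(pessimistic_target(qs).shape)   # offline target: (32, 1)
    print(exploration_bonus(qs).shape)    # online action scoring: (32, 1)

Under these assumptions, switching from pessimistic_target to loosened_target (plus the exploration bonus) at the start of fine-tuning is one simple way to read the abstract's "loosen the pessimism" step; the paper itself should be consulted for the exact mechanism.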

Authors (6)
  1. Kai Zhao (160 papers)
  2. Yi Ma (189 papers)
  3. Jianye Hao (185 papers)
  4. Jinyi Liu (18 papers)
  5. Yan Zheng (102 papers)
  6. Zhaopeng Meng (23 papers)
Citations (9)
