Zeroth-Order Actor-Critic (2201.12518v3)

Published 29 Jan 2022 in cs.LG, cs.AI, cs.SY, and eess.SY

Abstract: Evolution-based zeroth-order optimization methods and policy-gradient-based first-order methods are two promising alternatives for solving reinforcement learning (RL) problems, with complementary advantages. The former work with arbitrary policies, drive state-dependent and temporally-extended exploration, and possess a robustness-seeking property, but suffer from high sample complexity; the latter are more sample efficient but are restricted to differentiable policies, and the learned policies are less robust. To address these issues, we propose a novel Zeroth-Order Actor-Critic algorithm (ZOAC), which unifies the two approaches in an on-policy actor-critic architecture to preserve the advantages of both. In each iteration, ZOAC alternates between collecting rollouts with timestep-wise perturbation in parameter space, first-order policy evaluation (PEV), and zeroth-order policy improvement (PIM). We extensively evaluate the proposed method on a wide range of challenging continuous control benchmarks using different types of policies, where ZOAC outperforms zeroth-order and first-order baseline algorithms.
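The zeroth-order policy improvement described in the abstract can be illustrated with an evolution-strategies-style finite-difference update: perturb the policy parameters along random directions, score each perturbation by its return, and step toward higher-scoring directions. The sketch below is a minimal illustration of that idea on a toy objective; the function names, hyperparameters, and antithetic-sampling estimator are illustrative assumptions, not the paper's exact ZOAC procedure (which additionally uses timestep-wise perturbations and a learned critic for evaluation):

```python
import numpy as np

def zeroth_order_step(theta, evaluate, sigma=0.1, num_directions=16, lr=0.05):
    """One evolution-strategies-style zeroth-order policy update (illustrative).

    Perturbs parameters `theta` along Gaussian directions, scores each
    perturbation with the black-box `evaluate` (a stand-in for rollout
    returns), and moves theta toward higher scores. Antithetic sampling
    (+eps and -eps) reduces the variance of the gradient estimate.
    """
    rng = np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(num_directions):
        eps = rng.normal(size=theta.shape)
        r_plus = evaluate(theta + sigma * eps)
        r_minus = evaluate(theta - sigma * eps)
        # Finite-difference estimate of the directional derivative times eps.
        grad += (r_plus - r_minus) / (2.0 * sigma) * eps
    grad /= num_directions
    return theta + lr * grad

# Toy stand-in for episodic return: maximize -(theta - 3)^2, optimum at 3.
evaluate = lambda th: -float(np.sum((th - 3.0) ** 2))

theta = np.zeros(1)
for _ in range(200):
    theta = zeroth_order_step(theta, evaluate)
print(theta)  # approaches [3.]
```

Because the update only queries `evaluate` at perturbed parameter vectors, it never needs the policy to be differentiable, which is the property the abstract credits to zeroth-order methods; ZOAC's contribution is combining such updates with first-order policy evaluation in one actor-critic loop.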

Authors (4)
  1. Yuheng Lei (5 papers)
  2. Jianyu Chen (69 papers)
  3. Shengbo Eben Li (98 papers)
  4. Sifa Zheng (17 papers)
Citations (3)
