Actor-Critic Reinforcement Learning with Phased Actor (2404.11834v1)

Published 18 Apr 2024 in cs.LG

Abstract: Policy gradient methods in actor-critic reinforcement learning (RL) have become perhaps the most promising approaches to solving continuous optimal control problems. However, the trial-and-error nature of RL and the inherent randomness associated with solution approximations cause variations in the learned optimal values and policies. This has significantly hindered their successful deployment in real-life applications where control responses need to meet dynamic performance criteria deterministically. Here we propose a novel phased actor in actor-critic (PAAC) method, aiming at improving policy gradient estimation and thus the quality of the control policy. Specifically, PAAC accounts for both the $Q$ value and the TD error in its actor update. We prove qualitative properties of PAAC for learning convergence of the value and policy, solution optimality, and stability of system dynamics. Additionally, we show variance reduction in policy gradient estimation. PAAC performance is systematically and quantitatively evaluated in this study using the DeepMind Control Suite (DMC). Results show that PAAC leads to significant performance improvement measured by total cost, learning variance, robustness, learning speed and success rate. As PAAC can be piggybacked onto general policy gradient learning frameworks, we select well-known methods such as direct heuristic dynamic programming (dHDP), deep deterministic policy gradient (DDPG) and their variants to demonstrate the effectiveness of PAAC. Consequently, we provide a unified view of these related policy gradient algorithms.
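
As a concrete illustration of where a phased actor could sit in a standard actor-critic loop, the sketch below gates a DDPG-style actor step by the critic's TD error. This is a minimal sketch under assumed PyTorch conventions: the function name `phased_actor_update`, the `td_threshold` gating rule, and the batch layout are hypothetical stand-ins, not the paper's exact PAAC update, which combines the $Q$ value and the TD error in the actor's gradient.

```python
# Hypothetical sketch of a "phased" actor step on top of a DDPG-style
# actor-critic; the TD-error gate below is an illustrative stand-in for
# PAAC's actual update rule, not a reproduction of it.
import torch


def phased_actor_update(actor, critic, target_actor, target_critic,
                        actor_opt, batch, gamma=0.99, td_threshold=1.0):
    # batch: tensors sampled from a replay buffer; `done` is a float mask.
    s, a, r, s2, done = batch

    # Critic TD error on this batch (no gradients needed for the gate).
    with torch.no_grad():
        q_target = r + gamma * (1.0 - done) * target_critic(s2, target_actor(s2))
        td_error = q_target - critic(s, a)

    # "Phase" the actor: apply the deterministic policy gradient only when
    # the critic's TD error is small, i.e., when Q estimates look reliable.
    phase_weight = (td_error.abs().mean() < td_threshold).float()

    # DDPG-style actor objective: maximize Q(s, pi(s)).
    actor_loss = -critic(s, actor(s)).mean() * phase_weight

    actor_opt.zero_grad()  # only actor parameters are stepped here
    actor_loss.backward()
    actor_opt.step()
    return actor_loss.item(), td_error.abs().mean().item()
```

The same gating idea could in principle be attached to other policy gradient learners (the paper piggybacks PAAC onto dHDP, DDPG and their variants); only a DDPG-flavored actor step is sketched here.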

Authors (3)
  1. Ruofan Wu (33 papers)
  2. Junmin Zhong (3 papers)
  3. Jennie Si (12 papers)
