
Safety-Aware Task Composition for Discrete and Continuous Reinforcement Learning (2306.17033v1)

Published 29 Jun 2023 in cs.LG and cs.AI

Abstract: Compositionality is a critical aspect of scalable system design. Reinforcement learning (RL) has shown substantial success in task learning, but has only recently begun to truly leverage composition. In this paper, we focus on Boolean composition of learned tasks as opposed to functional or sequential composition. Existing Boolean composition for RL focuses on reaching a satisfying absorbing state in environments with discrete action spaces, but does not support composable safety (i.e., avoidance) constraints. We advance the state of the art in Boolean composition of learned tasks with three contributions: i) we introduce two distinct notions of safety in this framework; ii) we show how to enforce either safety semantics, prove correctness (under some assumptions), and analyze the trade-offs between the two safety notions; and iii) we extend Boolean composition from discrete to continuous action spaces. We demonstrate these techniques using modified versions of value iteration in a grid world, Deep Q-Network (DQN) in a grid world with image observations, and Twin Delayed DDPG (TD3) in a continuous-observation and continuous-action Bullet physics environment. We believe that these contributions advance the theory of safe reinforcement learning by allowing zero-shot composition of policies satisfying safety properties.
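
For context, the zero-shot Boolean composition the abstract builds on (e.g., the Boolean task algebra of Tasse et al., 2020) combines the value functions of already-learned base tasks pointwise: disjunction as a maximum and conjunction (approximately) as a minimum over Q-values. Below is a minimal sketch of that idea; the task names, array shapes, and helper functions are illustrative assumptions, not code from the paper:

```python
import numpy as np

# Hypothetical tabular Q-functions for two learned base tasks, indexed as
# Q[state, action]. In the paper's experiments these would instead come from
# value iteration, a DQN, or TD3 critics; random values stand in here.
n_states, n_actions = 16, 4
rng = np.random.default_rng(seed=0)
q_reach_a = rng.uniform(-1.0, 1.0, size=(n_states, n_actions))
q_reach_b = rng.uniform(-1.0, 1.0, size=(n_states, n_actions))

def q_or(q1, q2):
    """Disjunction ("reach A or B"): pointwise maximum of the Q-functions."""
    return np.maximum(q1, q2)

def q_and(q1, q2):
    """Conjunction: pointwise minimum, a standard (conservative)
    approximation in the Boolean task-algebra literature."""
    return np.minimum(q1, q2)

# Zero-shot policy for the composed task "reach A or reach B": act greedily
# with respect to the composed Q-values, with no further training.
q_composed = q_or(q_reach_a, q_reach_b)
greedy_policy = q_composed.argmax(axis=1)  # one action index per state
```

Note that min/max composition alone says nothing about avoidance: enforcing a safety (avoidance) constraint requires its own semantics and enforcement mechanism, which is what contributions (i) and (ii) supply, while contribution (iii) lifts the algebra from greedy action selection over discrete actions to continuous action spaces via TD3.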

Authors (3)
  1. Kevin Leahy
  2. Makai Mann
  3. Zachary Serlin
