Reconciling Spatial and Temporal Abstractions for Goal Representation (2401.09870v2)

Published 18 Jan 2024 in cs.LG, cs.AI, and cs.RO

Abstract: Goal representation affects the performance of Hierarchical Reinforcement Learning (HRL) algorithms by determining how the complex learning problem is decomposed into easier subtasks. Recent studies show that representations preserving temporally abstract environment dynamics succeed at difficult problems and come with theoretical guarantees of optimality. These methods, however, cannot scale to tasks where the environment dynamics grow more complex, i.e., where the temporally abstract transition relations depend on a larger number of variables. Other efforts have instead used spatial abstraction to mitigate these issues, but they scale poorly to high-dimensional environments and depend on prior knowledge. In this paper, we propose a novel three-layer HRL algorithm that introduces both a spatial and a temporal goal abstraction at different levels of the hierarchy. We provide a theoretical study of the regret bounds of the learned policies and evaluate the approach on complex continuous-control tasks, demonstrating the effectiveness of the spatial and temporal abstractions it learns. Open-source code is available at https://github.com/cosynus-lix/STAR.
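
The abstract describes a control loop in which a top layer plans over a spatial abstraction of the state space while lower layers realize temporally abstract subgoals. The sketch below illustrates that three-layer division of labor on a toy gridworld; it is not the authors' STAR implementation, and everything in it is an assumption: the fixed 4x4 cell partition stands in for the learned spatial abstraction, the replanning period K stands in for temporal abstraction, and the greedy hand-coded policies stand in for learned ones.

```python
# Illustrative sketch of a three-layer goal hierarchy (hypothetical; not the
# authors' STAR code). Level 3 picks the next abstract region (spatial
# abstraction), level 2 turns it into a concrete subgoal state, and level 1
# emits primitive actions; levels 3 and 2 replan only every K low-level steps
# (temporal abstraction).

SIZE, CELL, K = 8, 4, 4            # 8x8 grid, 4x4 abstract cells, replan period
GOAL = (7, 7)

def region(pos):
    """Spatial abstraction: map a concrete state to its coarse cell."""
    return (pos[0] // CELL, pos[1] // CELL)

def top_level(reg):
    """Level 3: step the current abstract region toward the goal's region."""
    g = region(GOAL)
    return (reg[0] + (g[0] > reg[0]) - (g[0] < reg[0]),
            reg[1] + (g[1] > reg[1]) - (g[1] < reg[1]))

def mid_level(target_reg):
    """Level 2: pick a concrete subgoal state inside the target region."""
    if target_reg == region(GOAL):
        return GOAL
    return (target_reg[0] * CELL + CELL // 2, target_reg[1] * CELL + CELL // 2)

def low_level(pos, subgoal):
    """Level 1: greedy primitive action toward the current subgoal."""
    return ((subgoal[0] > pos[0]) - (subgoal[0] < pos[0]),
            (subgoal[1] > pos[1]) - (subgoal[1] < pos[1]))

pos = (0, 0)
for t in range(100):
    if t % K == 0:                 # temporal abstraction: replan every K steps
        subgoal = mid_level(top_level(region(pos)))
    a = low_level(pos, subgoal)
    pos = (pos[0] + a[0], pos[1] + a[1])
    if pos == GOAL:
        print(f"reached goal in {t + 1} steps")
        break
```

In the paper's setting the abstraction and all three policies are learned and the tasks are continuous-control benchmarks, but the layering follows this pattern: the top two levels only ever reason over regions and subgoals, never over primitive actions.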
