Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning (2307.02689v1)

Published 5 Jul 2023 in cs.CL

Abstract: Text-based reinforcement learning agents have predominantly been neural network-based models with embeddings-based representation, learning uninterpretable policies that often do not generalize well to unseen games. On the other hand, neuro-symbolic methods, specifically those that leverage an intermediate formal representation, are gaining significant attention in language understanding tasks. This is because of their advantages ranging from inherent interpretability, the lesser requirement of training data, and being generalizable in scenarios with unseen data. Therefore, in this paper, we propose a modular, NEuro-Symbolic Textual Agent (NESTA) that combines a generic semantic parser with a rule induction system to learn abstract interpretable rules as policies. Our experiments on established text-based game benchmarks show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by achieving better generalization to unseen test games and learning from fewer training interactions.
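The abstract describes a modular pipeline: a generic semantic parser turns each textual observation into symbolic facts, and a rule-induction component learns interpretable rules that map those facts to game actions. The sketch below is a minimal illustration of that flow under stated assumptions, not the authors' implementation: `parse_to_facts` is a hypothetical stand-in for the AMR parser, and the hand-written `Rule` objects stand in for rules that the induction system would learn.

```python
# Minimal sketch of a NESTA-style neuro-symbolic pipeline (illustrative only).
# `parse_to_facts` stands in for a generic semantic parser (e.g. an AMR parser);
# the hand-written rules stand in for rules produced by a rule-induction system.

from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    predicate: str   # e.g. "carryable", "closed"
    args: tuple      # entity arguments, e.g. ("apple",)


def parse_to_facts(observation: str) -> set:
    """Hypothetical parser stub: maps observation text to symbolic facts."""
    facts = set()
    if "apple" in observation:
        facts.add(Fact("object", ("apple",)))
        facts.add(Fact("carryable", ("apple",)))
    if "door is closed" in observation:
        facts.add(Fact("closed", ("door",)))
    return facts


@dataclass
class Rule:
    """IF every condition holds for some entity X THEN emit the action."""
    conditions: list          # predicate names over a single variable X
    action_template: str      # e.g. "take {X}"

    def fire(self, facts: set) -> list:
        # Intersect the entities that satisfy each condition predicate.
        entities = None
        for pred in self.conditions:
            matching = {f.args[0] for f in facts if f.predicate == pred}
            entities = matching if entities is None else entities & matching
        return [self.action_template.format(X=e) for e in (entities or set())]


# Rules of this shape would come from the rule-induction component.
policy = [
    Rule(conditions=["object", "carryable"], action_template="take {X}"),
    Rule(conditions=["closed"], action_template="open {X}"),
]


def act(observation: str) -> list:
    """Parse the observation, then apply every learned rule to propose actions."""
    facts = parse_to_facts(observation)
    actions = []
    for rule in policy:
        actions.extend(rule.fire(facts))
    return actions


if __name__ == "__main__":
    print(act("You see a red apple on the table. The door is closed."))
    # -> ['take apple', 'open door']
```

Because the policy is a list of human-readable rules over symbolic facts rather than a learned embedding, the same rules can, in principle, transfer to unseen games whose observations parse into the same predicates, which is the generalization argument the abstract makes.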
