
ScriptWorld: Text Based Environment For Learning Procedural Knowledge (2307.03906v1)

Published 8 Jul 2023 in cs.CL, cs.AI, cs.LG, and cs.MA

Abstract: Text-based games provide a framework for developing natural language understanding and commonsense knowledge about the world in reinforcement-learning-based agents. Existing text-based environments often rely on fictional situations and characters to create a gaming framework and are far from real-world scenarios. In this paper, we introduce ScriptWorld: a text-based environment for teaching agents about real-world daily chores and hence imparting commonsense knowledge. To the best of our knowledge, it is the first interactive text-based gaming framework that consists of daily real-world human activities designed using a scripts dataset. We provide gaming environments for 10 daily activities and perform a detailed analysis of the proposed environment. We develop RL-based baseline models/agents to play the games in ScriptWorld. To understand the role of LLMs in such environments, we leverage features obtained from pre-trained LLMs in the RL agents. Our experiments show that prior knowledge obtained from a pre-trained LLM helps to solve real-world text-based gaming environments. We release the environment via GitHub: https://github.com/Exploration-Lab/ScriptWorld
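The abstract frames each daily activity as a script: an ordered sequence of steps, where an agent repeatedly picks the correct next step from a set of candidate actions and is rewarded for progressing through the activity. The toy sketch below illustrates that structure. The class name, interface, reward values, and distractor actions are all hypothetical illustrations for exposition, not the actual ScriptWorld API.

```python
import random

# Toy sketch of a script-based text environment in the spirit of ScriptWorld.
# All names and the interface here are hypothetical, not the released API.
class ToyScriptEnv:
    def __init__(self, script, num_distractors=2, seed=0):
        self.script = script                  # ordered list of step descriptions
        self.num_distractors = num_distractors
        self.rng = random.Random(seed)
        self.distractors = ["watch television", "take a nap", "call a friend"]
        self.reset()

    def reset(self):
        self.step_idx = 0
        return self._observation()

    def _observation(self):
        """Return the steps completed so far and shuffled candidate actions."""
        correct = self.script[self.step_idx]
        choices = [correct] + self.rng.sample(self.distractors, self.num_distractors)
        self.rng.shuffle(choices)
        context = self.script[:self.step_idx]
        return context, choices

    def step(self, action):
        correct = self.script[self.step_idx]
        if action == correct:
            reward = 1.0
            self.step_idx += 1                # advance to the next script step
        else:
            reward = -1.0                     # wrong choice: penalty, state unchanged
        done = self.step_idx == len(self.script)
        obs = None if done else self._observation()
        return obs, reward, done

# Walk through a hypothetical "make coffee" script.
script = ["boil water", "grind beans", "pour water over grounds", "drink coffee"]
env = ToyScriptEnv(script)
obs, total, done = env.reset(), 0.0, False
while not done:
    context, choices = obs
    # A real agent would score `choices` against `context`; here we cheat
    # with the ground-truth next step to show a full successful episode.
    action = script[len(context)]
    obs, reward, done = env.step(action)
    total += reward
print(total)  # 4.0: one unit of reward per correct step
```

An actual RL agent would score the candidate actions against the context, for example with sentence embeddings from a pre-trained LLM feeding a policy network, rather than reading off the ground-truth step as done here.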

