
Interactive Language Learning by Question Answering (1908.10909v1)

Published 28 Aug 2019 in cs.CL and cs.LG

Abstract: Humans observe and interact with the world to acquire knowledge. However, most existing machine reading comprehension (MRC) tasks miss the interactive, information-seeking component of comprehension. Such tasks present models with static documents that contain all necessary information, usually concentrated in a single short substring. Thus, models can achieve strong performance through simple word- and phrase-based pattern matching. We address this problem by formulating a novel text-based question answering task: Question Answering with Interactive Text (QAit). In QAit, an agent must interact with a partially observable text-based environment to gather information required to answer questions. QAit poses questions about the existence, location, and attributes of objects found in the environment. The data is built using a text-based game generator that defines the underlying dynamics of interaction with the environment. We propose and evaluate a set of baseline models for the QAit task that includes deep reinforcement learning agents. Experiments show that the task presents a major challenge for machine reading systems, while humans solve it with relative ease.
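The abstract describes a two-phase protocol: the agent first acts in a partially observable text environment to gather information, then answers a question about it. As a rough illustration of that protocol only, below is a minimal, self-contained Python sketch; the toy environment, command set, and answering heuristic are invented for illustration and do not reflect the paper's actual QAit environments (which are built with a text-based game generator) or its baseline agents.

# Minimal, hypothetical sketch of a QAit-style interaction loop.
# The environment class, commands, and question format are illustrative
# stand-ins, not the paper's actual interface or data.
import random

class ToyTextEnv:
    """A tiny partially observable text environment (illustrative only)."""
    def __init__(self):
        self.rooms = {"kitchen": ["apple"], "pantry": ["knife"]}
        self.location = "kitchen"

    def reset(self):
        self.location = "kitchen"
        return self._observe()

    def step(self, command):
        # Only a 'go <room>' command changes state in this toy world.
        if command.startswith("go "):
            target = command.split(" ", 1)[1]
            if target in self.rooms:
                self.location = target
        return self._observe()

    def _observe(self):
        items = ", ".join(self.rooms[self.location])
        return f"You are in the {self.location}. You see: {items}."

def answer_question(question, observations):
    """Naive baseline: answer an existence question from gathered observations."""
    entity = question.split("a ")[-1].rstrip("?")
    seen = any(entity in obs for obs in observations)
    return "yes" if seen else "no"

env = ToyTextEnv()
observations = [env.reset()]
question = "Is there a knife?"

# Interaction phase: the agent issues text commands to gather information.
for _ in range(3):
    command = random.choice(["go kitchen", "go pantry", "look"])
    observations.append(env.step(command))

# Question-answering phase: answer from the information gathered so far.
print(answer_question(question, observations))

Whether the toy agent answers correctly here depends on whether its (random) exploration happens to reach the pantry, which mirrors the paper's point that information-seeking behaviour, not pattern matching over a static document, is what the task rewards.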

Authors (7)
  1. Xingdi Yuan (46 papers)
  2. Jie Fu (229 papers)
  3. Zhouhan Lin (57 papers)
  4. Christopher Pal (97 papers)
  5. Yoshua Bengio (601 papers)
  6. Adam Trischler (50 papers)
  7. Marc-Alexandre Côté (4 papers)
Citations (46)
