Using NLU in Context for Question Answering: Improving on Facebook's bAbI Tasks

arXiv:1709.04558
Published Sep 13, 2017 in cs.CL and cs.AI

Abstract

For the next step in human-to-machine interaction, AI should interact predominantly through natural language, which, if it worked reliably, would be the fastest way to communicate. Facebook's toy tasks (bAbI) provide a useful benchmark for comparing conversational AI implementations. While the published experiments so far have exploited the distributional hypothesis with machine learning, our model exploits natural language understanding (NLU), decomposing language with Role and Reference Grammar (RRG) and the brain-based Patom theory. Our linguistics-based combinatorial system for conversational AI has many advantages: it passes the bAbI task tests without parsing or statistics while improving scalability. Our model validates both the training and test data to find 'garbage' input and output (GIGO). It is not rules-based, nor does it use parts of speech; instead it relies on meaning. While Deep Learning is difficult to debug and fix, every step in our model can be understood and changed like any non-statistical computer program. Deep Learning's lack of explicable reasoning has raised opposition to AI, partly out of fear of the unknown. To support the goals of AI, we propose extended tasks that use human-level statements with tense, aspect, and voice, and embedded clauses with junctures, and that require answers produced by natural language generation (NLG) instead of keywords. While machine learning permits invalid training data to produce incorrect test responses, our system cannot be misled this way, because its context tracking would have to be intentionally broken. We believe no existing learning system can currently solve these extended natural language tests. There appears to be a knowledge gap between NLP researchers and linguists, but ongoing competitive results such as these promise to narrow it.
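For context, bAbI tasks present short stories followed by questions with keyword answers (e.g., task 1, single supporting fact: "Mary moved to the bathroom. ... Where is Mary? bathroom"). The sketch below is not the paper's RRG/Patom model, which the abstract does not specify; it is a minimal, hypothetical illustration of two ideas the abstract claims: answering from tracked context rather than from learned correlations, and validating labeled answers against that context (the GIGO check). The function names, regexes, and messages are all illustrative assumptions.

```python
# Minimal sketch of bAbI-style context tracking (task 1 format).
# NOT the paper's RRG/Patom implementation; a hypothetical illustration
# of (a) answering "Where is X?" from tracked state and (b) flagging
# labeled answers that contradict that state (the GIGO check).
import re

MOVE_VERBS = r"(?:moved|went|journeyed|travelled|traveled)"

def answer_story(lines):
    """Track 'who is where' across statements; answer 'Where is X?'."""
    location = {}  # entity -> most recently stated location
    for line in lines:
        move = re.match(rf"(\w+) {MOVE_VERBS} to the (\w+)\.", line)
        if move:
            person, place = move.groups()
            location[person] = place
            continue
        question = re.match(r"Where is (\w+)\?(?:\s+(\w+))?", line)
        if question:
            person, labeled = question.groups()
            predicted = location.get(person)
            # GIGO check: a supervised label that contradicts the
            # tracked context indicates invalid training/test data.
            if labeled and labeled != predicted:
                print(f"GIGO: label '{labeled}' contradicts tracked "
                      f"context '{predicted}' for {person}")
            yield person, predicted

story = [
    "Mary moved to the bathroom.",
    "John went to the hallway.",
    "Where is Mary? bathroom",
]
for person, place in answer_story(story):
    print(f"{person} is in the {place}.")  # Mary is in the bathroom.
```

The story lines follow the published bAbI task format; note that the full answer sentence printed at the end gestures at the abstract's proposal to require NLG answers rather than keywords.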
