Abstract

Intelligent voice assistants, such as Apple Siri and Amazon Alexa, are widely used nowadays. These task-oriented dialog systems require a semantic parsing module to process user utterances and understand the action to be performed. This semantic parsing component was initially implemented with rule-based or statistical slot-filling approaches for processing simple queries; however, the appearance of more complex utterances demanded the application of shift-reduce parsers or sequence-to-sequence models. While shift-reduce approaches initially proved to be the best option, recent advances in sequence-to-sequence systems have made them the highest-performing method for this task. In this article, we advance the research on shift-reduce semantic parsing for task-oriented dialog. In particular, we implement novel shift-reduce parsers that rely on Stack-Transformers, which make it possible to adequately model transition systems on top of the cutting-edge Transformer architecture, notably boosting shift-reduce parsing performance. Additionally, we adapt alternative transition systems from constituency parsing to task-oriented parsing, and empirically show that the in-order algorithm substantially outperforms the commonly used top-down strategy. Finally, we extensively test our approach on multiple domains from the Facebook TOP benchmark, improving over existing shift-reduce parsers and state-of-the-art sequence-to-sequence models in both high-resource and low-resource settings.
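To make the transition-system terminology concrete, the sketch below shows a top-down shift-reduce parse producing a bracketed tree in the style of the Facebook TOP benchmark. This is an illustrative toy, not the paper's implementation: the action names (SHIFT, NT, REDUCE) and the oracle action sequence follow standard transition-based parsing conventions and are our assumptions, though the example utterance is the canonical one from the TOP dataset.

```python
# Toy top-down shift-reduce transition system for task-oriented parsing.
# Assumption: actions follow standard transition-based parsing conventions.

OPEN = "OPEN"  # sentinel marking a not-yet-closed nonterminal on the stack

def parse(tokens, actions):
    """Apply a transition sequence and return the bracketed parse.

    SHIFT       -- move the next input token onto the stack
    ("NT", X)   -- push an open nonterminal X (an intent or slot label)
    REDUCE      -- pop items down to the nearest open nonterminal and
                   close them into a single subtree
    """
    stack, buffer = [], list(tokens)
    for action in actions:
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "REDUCE":
            children = []
            while not (isinstance(stack[-1], tuple) and stack[-1][0] == OPEN):
                children.append(stack.pop())
            _, label = stack.pop()
            stack.append("[" + label + " " + " ".join(reversed(children)) + " ]")
        else:  # ("NT", X)
            stack.append((OPEN, action[1]))
    assert len(stack) == 1 and not buffer, "ill-formed action sequence"
    return stack[0]

# Hypothetical oracle action sequence for the classic TOP example utterance:
tokens = "Driving directions to the Eagles game".split()
actions = [
    ("NT", "IN:GET_DIRECTIONS"), "SHIFT", "SHIFT", "SHIFT",
    ("NT", "SL:DESTINATION"), ("NT", "IN:GET_EVENT"), "SHIFT",
    ("NT", "SL:NAME_EVENT"), "SHIFT", "REDUCE",
    ("NT", "SL:CAT_EVENT"), "SHIFT", "REDUCE",
    "REDUCE", "REDUCE", "REDUCE",
]
print(parse(tokens, actions))
# [IN:GET_DIRECTIONS Driving directions to [SL:DESTINATION
#   [IN:GET_EVENT the [SL:NAME_EVENT Eagles ] [SL:CAT_EVENT game ] ] ] ]
```

Roughly speaking, the in-order variant the paper favors differs in when the NT action fires: the nonterminal is pushed after its first child is already on the stack rather than before it, and the role of the Stack-Transformer is to condition each action prediction on dedicated attention over the stack and buffer contents.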
