
Abstract

Previous research on knowledge selection has mostly relied on language-model-based methods or knowledge ranking. However, approaches that rely on language models take all knowledge snippets as sequential input, even though the knowledge carries no sequential information in most circumstances. Knowledge-ranking methods, on the other hand, process the dialog history and each knowledge snippet separately, so they do not consider interactions between knowledge snippets. In the Tenth Dialog System Technology Challenge (DSTC10), we participated in the second track, Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations. To address the problems above, we modified the training methods of state-of-the-art (SOTA) models for the first and third sub-tasks. For the second sub-task, knowledge selection, we propose the Graph-Knowledge Selector (GKS), a graph-attention-based model combined with a language model. GKS makes knowledge-selection decisions by attending simultaneously over the knowledge embeddings produced by the language model, without imposing sequential structure, so that relations across knowledge snippets inform the selection. As a result, GKS outperforms several SOTA models on knowledge selection on the dataset from the Ninth Dialog System Technology Challenge (DSTC9).
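The abstract does not include an implementation, but the core idea (scoring all candidate knowledge snippets jointly, as an unordered set, rather than ranking each against the dialog history in isolation) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the authors' code: the class and parameter names are hypothetical, and multi-head self-attention over a fully connected graph of snippet embeddings stands in for the paper's graph-attention layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphKnowledgeSelector(nn.Module):
    """Minimal sketch (hypothetical, not the authors' implementation):
    score knowledge snippets jointly via self-attention over a fully
    connected graph of snippet embeddings. No positional encoding is
    used, so the snippet set stays order-invariant."""

    def __init__(self, hidden_dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Self-attention over the fully connected snippet graph stands in
        # for a graph-attention layer here.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, knowledge_emb: torch.Tensor) -> torch.Tensor:
        # knowledge_emb: (batch, num_snippets, hidden_dim), e.g. the [CLS]
        # embedding of "dialog history [SEP] snippet" from a language model.
        attended, _ = self.attn(knowledge_emb, knowledge_emb, knowledge_emb)
        logits = self.scorer(attended).squeeze(-1)  # (batch, num_snippets)
        return logits                               # argmax -> selected snippet


# Usage: 16 candidate snippets per dialog, 768-d language-model embeddings.
selector = GraphKnowledgeSelector()
emb = torch.randn(2, 16, 768)
probs = F.softmax(selector(emb), dim=-1)  # selection distribution per dialog
```

Because every snippet attends to every other snippet before scoring, each selection decision is informed by the whole candidate set, which is the contrast the abstract draws with per-snippet ranking methods.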
