
Abstract

Most current methods for multi-hop question answering (QA) over knowledge graphs (KGs) only provide a final, conclusive answer, e.g., a set of KG entities, without explanations that would let ordinary users review and comprehend the reasoning. This issue severely limits the application of KG-based QA in real-world scenarios. It is non-trivial to solve due to two challenges: first, annotations of the reasoning chains of multi-hop questions, which could serve as supervision for explanation generation, are usually lacking; second, it is difficult to maintain high efficiency when explicit KG triples must be retrieved to generate explanations. In this paper, we propose a novel Graph Neural Network-based Two-Step Reasoning model (GNN2R) to solve this issue. GNN2R efficiently provides both final answers and reasoning subgraphs as a rationale behind those answers, using only the weak supervision available from question-final answer pairs. We extensively evaluated GNN2R in experiments with detailed analyses. The results demonstrate that, in terms of effectiveness, efficiency, and the quality of generated explanations, GNN2R outperforms existing state-of-the-art methods applicable to this task. Our code and pre-trained models are available at https://github.com/ruijie-wang-uzh/GNN2R.

Overview

  • GNN2R is a new model for Knowledge Graph-based Question Answering that provides answers and explanations even with limited training data.

  • The model uses a two-step process involving graph neural networks for initial coarse reasoning and language models for explicit reasoning.

  • GNN2R outperforms existing methods in giving accurate answers and comprehensible rationales on benchmark datasets for multi-hop QA.

  • Weakly-supervised learning is utilized, requiring only question-answer pairs, avoiding the need for fully annotated reasoning paths.

  • The model holds promise for future applications such as KG chatbots, enhancing transparency and trust in AI systems through clear explanations.

In the realm of artificial intelligence, one area of growing importance is the ability of systems not only to provide answers drawn from large sets of data but also to explain the rationale behind those answers. This is particularly critical when dealing with complex data structures like Knowledge Graphs (KGs), which store information as entities connected by relationships, forming a network of interconnected data points.

The recently proposed GNN2R model (Graph Neural Network-based Two-Step Reasoning) addresses two key challenges in Knowledge Graph-based Question Answering (KG-based QA): how to generate explanations for the answers it provides, and how to do so with limited training data, referred to as weak supervision. The model also emphasizes efficiency, recognizing that speed is a crucial factor in real-world applications.

The essence of GNN2R lies in its two-step process. In the first step, GNN-based coarse reasoning, a newly devised graph neural network (GNN) encodes the question and KG entities into a joint space where questions can be effectively matched with their answers; this quickly narrows down candidate answers and the subgraphs that justify them. In the second step, LM-based explicit reasoning, a language model (LM) fine-tuned with custom algorithms sifts through the candidate reasoning subgraphs and selects those that semantically align with the posed question, yielding both the final answer and a clear justification grounded in the knowledge graph's structure.
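
To make this retrieve-then-rank design concrete, here is a minimal PyTorch sketch of what such a pipeline could look like. It is not the authors' implementation: the class names, the simple sum-aggregation message passing, and the cosine-similarity scoring are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseReasoningGNN(nn.Module):
    """Step 1 (coarse reasoning): message passing over the KG to embed
    entities into the same space as the question, so candidate answers
    can be found by similarity search."""
    def __init__(self, dim: int, num_layers: int = 2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(2 * dim, dim) for _ in range(num_layers)
        )

    def forward(self, ent: torch.Tensor, edges: torch.Tensor,
                rel: torch.Tensor) -> torch.Tensor:
        # ent:   (num_entities, dim)  initial entity embeddings
        # edges: (2, num_edges)       head/tail entity indices per triple
        # rel:   (num_edges, dim)     relation embedding of each triple
        h = ent
        for layer in self.layers:
            # Build a message per triple from its head entity and relation,
            # then sum the messages arriving at each tail entity.
            m = layer(torch.cat([h[edges[0]], rel], dim=-1))
            agg = torch.zeros_like(h)
            agg.index_add_(0, edges[1], m)
            h = F.relu(h + agg)  # residual update
        return h

def coarse_candidates(q_emb: torch.Tensor, ent_emb: torch.Tensor, k: int = 5):
    """Rank entities by cosine similarity to the question embedding."""
    sims = F.cosine_similarity(q_emb.unsqueeze(0), ent_emb, dim=-1)
    return sims.topk(k).indices

# Step 2 (explicit reasoning) would verbalize the reasoning subgraph behind
# each candidate (the triples linking the question's topic entity to the
# candidate) and re-rank these verbalizations against the question with a
# fine-tuned LM, keeping the best-aligned subgraph as the explanation.
```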

An extensive series of experiments conducted on commonly adopted benchmark datasets reveals that GNN2R outperforms state-of-the-art methods in KG-based QA, providing accurate answers and, more importantly, rationales that are understandable to users. On multi-hop QA benchmarks, where relationships must be traced across several hops, GNN2R shows significant improvements. It is also efficient, answering questions within response times considered non-disruptive for interactive use.

An interesting component of the GNN2R approach is its weakly supervised learning: only question-answer pairs are used for training, rather than fully annotated reasoning paths or explanations. This matters because, in real-world scenarios, obtaining full annotations can be costly or infeasible.
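
One common way to train with question-answer pairs alone is a contrastive objective: gold answer entities should score higher against the question than sampled non-answer entities. The sketch below illustrates this idea with a margin ranking loss; the paper's exact objective may differ, and the function name and margin value are hypothetical.

```python
import torch
import torch.nn.functional as F

def weak_supervision_loss(q_emb: torch.Tensor, ent_emb: torch.Tensor,
                          answer_ids: torch.Tensor, negative_ids: torch.Tensor,
                          margin: float = 0.5) -> torch.Tensor:
    """Hinge loss: each gold answer should beat each sampled negative
    by at least `margin` in cosine similarity to the question."""
    pos = F.cosine_similarity(q_emb.unsqueeze(0), ent_emb[answer_ids], dim=-1)
    neg = F.cosine_similarity(q_emb.unsqueeze(0), ent_emb[negative_ids], dim=-1)
    # Compare every positive with every negative pairwise: shape (P, N).
    return F.relu(margin - pos.unsqueeze(1) + neg.unsqueeze(0)).mean()
```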

A unique contribution of GNN2R to the field of explainable AI and KGs is the quality of its generated explanations. Unlike previous approaches, which may produce reasoning chains that are too general or not semantically aligned with the user's query, GNN2R ensures that its rationales are both concise and relevant, accurately addressing the user's intent. This yields a substantial improvement in the precision, recall, and overall quality of explanations compared with other reasoning-based methods.
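
For intuition, explanation quality can be quantified by comparing the triples in a generated reasoning subgraph against a gold reasoning chain that is available for evaluation only. A simple set-overlap version is sketched below; the paper's exact metric definitions may differ.

```python
def explanation_precision_recall(predicted: set, gold: set):
    """Set-overlap precision/recall over explanation triples."""
    overlap = predicted & gold
    precision = len(overlap) / len(predicted) if predicted else 0.0
    recall = len(overlap) / len(gold) if gold else 0.0
    return precision, recall
```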

The model's potential extends beyond answering which entity or entities satisfy a query. It could pave the way for KG chatbots: systems capable of holding conversations with users while providing justifiable, transparent responses drawn from vast interconnected datasets.

In conclusion, GNN2R represents a significant step forward in the integration of knowledge graphs and artificial intelligence. Not only does it provide a method for efficiently and correctly answering complex multi-hop questions, but it also enhances user trust and understanding by offering clear and justifiable reasoning paths. As the authors look toward future work, they envision generating even more natural explanations and putting them to the test in real-world scenarios, continuing the advancement towards more transparent and user-friendly AI systems.
