
LMExplainer: a Knowledge-Enhanced Explainer for Language Models

arXiv:2303.16537
Published Mar 29, 2023 in cs.CL

Abstract

LLMs such as GPT-4 are powerful and can handle a wide range of NLP tasks. However, their results can be difficult to interpret due to their multi-layer nonlinear structure and millions of parameters. A lack of clarity about how language models (LMs) work can make them unreliable, difficult to trust, and potentially dangerous to use in real-world scenarios. Most recent works exploit attention weights to explain LM predictions. However, pure attention-based explanations cannot keep pace with the growing complexity of LMs and cannot reason about their decision-making processes. We propose LMExplainer, a knowledge-enhanced explainer for LMs that provides human-understandable explanations. We use a knowledge graph (KG) and a graph attention neural network to extract the key decision signals of the LM. We further explore whether interpretation can also help the model understand the task better. Our experimental results show that LMExplainer outperforms existing LM+KG methods on CommonsenseQA and OpenBookQA. Comparing our explanations with those of generated-explanation methods and with human-annotated results shows that our method provides more comprehensive and clearer explanations. LMExplainer demonstrates the potential to improve model performance and to explain the LM reasoning process in natural language.
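The core mechanism the abstract describes, attaching a graph attention network to a KG subgraph and reading its attention weights as decision signals, can be illustrated with a minimal sketch. This is not the authors' code: it assumes PyTorch and PyTorch Geometric, uses toy placeholder node features and edges, and simply shows how per-edge attention from a GAT layer can be extracted and ranked as explanation evidence.

```python
# Minimal sketch (assumed setup, not the LMExplainer implementation):
# score KG edges with a graph attention layer and rank them by attention
# weight, the kind of signal an explainer can surface to a human.
import torch
from torch_geometric.nn import GATConv

# Toy KG subgraph: 4 concept nodes. In practice the node features would be
# initialized from LM embeddings of the retrieved concepts, and the GAT
# would be trained end-to-end with the LM; here everything is random.
num_nodes, lm_dim, hidden_dim = 4, 16, 8
x = torch.randn(num_nodes, lm_dim)          # stand-in for LM-derived features
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])   # directed edges among concepts

gat = GATConv(lm_dim, hidden_dim, heads=1)

# return_attention_weights=True exposes the learned edge attention (alpha);
# note GATConv adds self-loops, so the returned edge_index includes them.
out, (att_edge_index, alpha) = gat(x, edge_index, return_attention_weights=True)

# Rank edges by attention: the top edges point at the KG relations the
# model attended to most, which is the raw material for an explanation.
top = alpha.squeeze(-1).argsort(descending=True)
for i in top:
    src, dst = att_edge_index[:, i].tolist()
    print(f"edge {src} -> {dst}: attention = {alpha[i].item():.3f}")
```

In a trained model, these ranked edges would correspond to KG relations (e.g., ConceptNet triples) that most influenced the prediction, which can then be verbalized into a natural-language explanation.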
