Abstract

Fact-checking is a popular countermeasure against misinformation, but the massive volume of information online has spurred active research into automating the task. As with expert fact-checking, it is not enough for an automated fact-checker to be accurate; it must also inform and convince the user of the validity of its prediction. This becomes viable with explainable artificial intelligence (XAI). In this work, we conduct a study of XAI fact-checkers involving 180 participants to determine how XAI affects users' actions towards news and their attitudes towards explanations. Our results suggest that XAI has limited effects on users' agreement with the veracity prediction of the automated fact-checker and on their intent to share news. However, XAI does nudge users towards forming uniform judgments of news veracity, signaling a reliance on the explanations. We also found polarizing preferences towards XAI, which raise several design considerations.
