Abstract

The posting and spreading of misinformation on social media is ignited by personal decisions about the truthfulness of news, decisions that can trigger wide and deep cascades at large scale within minutes. When individuals are exposed to information, they usually take only a few seconds to decide whether the content (or the source) is reliable and whether to share it. Although the opportunity to verify a rumour is often just one click away, many users fail to make a correct evaluation. We studied this phenomenon with a web-based questionnaire completed by 7,298 different volunteers, in which participants were asked to label 20 news items as true or false. Interestingly, false news is identified correctly more often than true news, but showing the full article instead of just the title, surprisingly, does not increase overall accuracy. Moreover, displaying the original source of the news may in some cases mislead users, whereas genuine wisdom of the crowd can positively assist individuals' ability to classify correctly. Finally, participants whose browsing activity suggests parallel fact-checking show better performance and identify themselves as young adults. This work highlights a series of pitfalls that can influence human annotators when building false-news datasets, which in turn fuel research on automated fake-news detection; furthermore, these findings challenge the common AI rationale of suggesting that users read the full article before re-sharing it.
