Abstract

Warning users about misinformation on social media is not a simple usability task. Soft moderation must balance debunking falsehoods against avoiding moderation bias, all while preserving the social media consumption flow. Platforms therefore employ minimally distinguishable warning tags with generic text beneath suspected misinformation content. This approach has produced an unfavorable outcome in which the warnings "backfired" and users believed the misinformation more, not less. In response, we developed enhancements to misinformation warnings that advise users on the context of the information hazard and expose them to standard warning iconography. We ran an A/B evaluation against Twitter's original warning tags in a 337-participant usability study. The majority of participants preferred the enhancements as a nudge toward recognizing and avoiding misinformation. The enhanced warning tags were most favored by politically left-leaning participants and, to a lesser degree, moderate participants, but they also appealed to roughly a third of right-leaning participants. Education level was the only demographic factor shaping participants' preferences. We use our findings to propose user-tailored improvements to the soft moderation of misinformation on social media.
