Abstract

LLMs are already as persuasive as humans. However, we know very little about how they do it. This paper investigates the persuasion strategies of LLMs, comparing them with human-generated arguments. Using a dataset from an experiment with 1,251 participants, we analyze the persuasion strategies of LLM-generated and human-generated arguments through measures of cognitive effort (lexical and grammatical complexity) and moral-emotional language (sentiment and moral analysis). The study reveals that LLMs produce arguments that require higher cognitive effort, exhibiting more complex grammatical and lexical structures than their human counterparts. Additionally, LLMs demonstrate a significant propensity to engage more deeply with moral language, utilizing both positive and negative moral foundations more frequently than humans. In contrast with previous research, no significant difference was found in the emotional content produced by LLMs and humans. These findings contribute to the discourse on AI and persuasion, highlighting the dual potential of LLMs to both enhance and undermine informational integrity through their communication strategies for digital persuasion.

Overview

  • The paper examines the persuasion strategies of LLMs compared to human-generated arguments using a dataset of 1,251 participants, focusing on cognitive effort and moral-emotional language.

  • Key findings indicate that LLMs exhibit higher grammatical and lexical complexity and incorporate more moral language than human arguments, although sentiment analysis shows similar emotional polarity.

  • The research points to implications for AI ethics, digital misinformation, and communication sciences, advocating for AI literacy programs and ethical guidelines to manage the persuasive power of LLMs.

Overview of "LLMs are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments."

The paper by Carlos Carrasco-Farré presents a rigorous empirical inquiry into the persuasion strategies employed by LLMs, comparing them with human-generated arguments. The research relies on a dataset of 1,251 participants to evaluate the mechanisms through which LLMs achieve human-level persuasion. The study examines cognitive effort indicators (lexical and grammatical complexity) and moral-emotional language dimensions (sentiment and morality).

Key Findings

Cognitive Effort:

  • Grammatical Complexity: LLMs produce arguments that require higher cognitive effort due to increased grammatical complexity, with a mean readability score of 13.26 compared to 12.16 for human-generated arguments (p < .001).
  • Lexical Complexity: LLMs also exhibit a higher lexical complexity, with a mean perplexity score of 111.39 compared to 102.69 for human arguments (p < .001).
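Both measures can be approximated with off-the-shelf tooling. The sketch below is a minimal illustration, assuming a Flesch-Kincaid grade level as the readability proxy and GPT-2 perplexity as the lexical-complexity proxy; the paper's exact instruments may differ.

```python
# Illustrative proxies for the two cognitive-effort measures.
# textstat (readability) and GPT-2 perplexity (lexical complexity) are
# assumed stand-ins, not necessarily the paper's actual pipeline.
import torch
import textstat
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def readability_grade(text: str) -> float:
    """Flesch-Kincaid grade level: higher means grammatically more demanding."""
    return textstat.flesch_kincaid_grade(text)

def perplexity(text: str) -> float:
    """Perplexity under GPT-2: higher means less predictable wording."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

argument = "Vaccination mandates protect the most vulnerable members of society."
print(readability_grade(argument), perplexity(argument))
```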

Moral-Emotional Language:

  • Overall Morality: LLMs incorporate more moral language in their arguments, with a significantly higher morality score (mean = 12.09) compared to human arguments (mean = 9.91, p < .001).
  • Positive and Negative Moral Foundations: LLMs utilize both positive (e.g., care, fairness, authority) and negative (e.g., harm, cheating) moral foundations more frequently than human counterparts.
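Moral-foundation scores of this kind are typically computed by matching tokens against foundation word lists (for example, the Moral Foundations Dictionary). The sketch below uses a tiny, hypothetical lexicon purely for illustration; it is not the dictionary used in the study.

```python
# Minimal sketch of dictionary-based moral-foundation scoring.
# The word lists are a small illustrative subset, not a full lexicon.
import re

MORAL_LEXICON = {
    "care":      {"protect", "care", "compassion", "safety"},
    "harm":      {"harm", "hurt", "suffer", "cruel"},
    "fairness":  {"fair", "equal", "justice", "rights"},
    "cheating":  {"cheat", "fraud", "dishonest", "unfair"},
    "authority": {"law", "duty", "order", "respect"},
}

def moral_scores(text: str) -> dict:
    """Matches per 100 words for each moral foundation."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        foundation: 100 * sum(t in words for t in tokens) / n
        for foundation, words in MORAL_LEXICON.items()
    }

print(moral_scores("A fair society has a duty to protect citizens from harm."))
```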

Sentiment Analysis:

  • No significant difference in sentiment was found between LLM and human arguments, with both showing similar emotional polarity (p = .980).
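Polarity comparisons of this kind can be reproduced in spirit with any standard sentiment analyzer. The sketch below assumes VADER as a stand-in; the paper's actual sentiment tool is not specified here, and the example arguments are invented.

```python
# Illustrative sentiment-polarity comparison between two sets of arguments.
# VADER is an assumed stand-in for the study's sentiment instrument.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

llm_args   = ["Universal healthcare protects families from ruinous costs."]
human_args = ["People deserve healthcare without going bankrupt."]

llm_polarity   = [sia.polarity_scores(t)["compound"] for t in llm_args]
human_polarity = [sia.polarity_scores(t)["compound"] for t in human_args]

print(sum(llm_polarity) / len(llm_polarity),
      sum(human_polarity) / len(human_polarity))
```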

Prompt Sensitivity:

  • Different prompts (e.g., Compelling Case, Expert Rhetorics, Logical Reasoning) lead LLMs to vary their cognitive and moral-emotional language strategies, yet LLM-generated arguments maintain higher complexity and moral content than human arguments across all prompting styles.
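The group comparisons reported above (means with p-values) correspond to standard two-sample tests, and the prompt-sensitivity result to a comparison across prompting conditions. The sketch below illustrates that style of analysis with hypothetical file and column names ("arguments_scored.csv", "source", "prompt", "readability"); it is not the paper's actual code.

```python
# Sketch of the kind of group comparison behind the reported statistics.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("arguments_scored.csv")  # hypothetical per-argument metrics

# Human vs. LLM on one metric (Welch's t-test, unequal variances assumed).
human = df.loc[df["source"] == "human", "readability"]
llm   = df.loc[df["source"] == "llm",   "readability"]
t, p = stats.ttest_ind(llm, human, equal_var=False)
print(f"LLM mean={llm.mean():.2f}, human mean={human.mean():.2f}, p={p:.3f}")

# Prompt sensitivity: one-way ANOVA across LLM prompting styles.
groups = [g["readability"].values for _, g in df[df["source"] == "llm"].groupby("prompt")]
f, p_anova = stats.f_oneway(*groups)
print(f"ANOVA across prompts: F={f:.2f}, p={p_anova:.3f}")
```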

Implications and Future Developments

The findings highlight several relevant implications for digital misinformation, AI ethics, and communication sciences. While LLMs show human-level persuasive power, they achieve this through different strategies involving higher cognitive load and more frequent usage of moral language. These results contribute to understanding the dual potential of LLMs to both enhance and undermine informational integrity.

Practical Implications

  1. Communication Strategy: The research offers communication scientists guidance on how information processing influences persuasion. The complexity and moral-emotional depth of LLM-generated arguments present new avenues for crafting more engaging content.
  2. Policy and Ethical Guidelines: The potential for LLMs to produce persuasive, morally charged, and complex arguments necessitates the development of robust AI literacy programs and ethical guidelines. This is especially crucial for mitigating risks in democratic processes and other public domains where LLM-generated misinformation can have significant impacts.
  3. Technological and Educational Strategies: The study stresses the importance of equipping policymakers, technologists, and educators with tools to discern AI-generated content and prevent manipulative tactics.

Theoretical Implications

The comparison of LLMs and human arguments adds depth to existing literature on AI communication strategies. Notably, the study suggests that higher cognitive effort, typically assumed to hinder persuasion, can in some contexts promote deeper engagement. Additionally, the pronounced use of moral language by LLMs aligns with theories suggesting that moral-emotional content captures attention and bolsters engagement and persuasiveness.

Future Research Directions

Several avenues for future research emerge from these findings:

  1. Longitudinal Studies: Examining the long-term impact of LLM-generated persuasive content on beliefs and behaviors.
  2. User-AI Interaction: Investigating how user awareness of interacting with LLMs affects persuasion routes and cognitive engagement.
  3. Ethical Implications: Further exploring the ethical ramifications of leveraging moral-emotional language in AI-driven persuasion and its impact on societal norms.
  4. Personalization: Assessing the effectiveness of personalized persuasive messages generated by LLMs across different psychological and demographic profiles.

Conclusion

This paper advances the understanding of LLM persuasion mechanisms by highlighting the differences in cognitive effort and moral-emotional language use compared to human arguments. The insights provided are relevant for both theoretical advancement and practical application in the fields of AI ethics, digital communication, and misinformation countermeasures. The ongoing evolution of LLMs necessitates continued research to adapt and refine ethical guidelines and communication strategies.
