Emergent Mind

Do LLM Agents Exhibit Social Behavior?

(2312.15198)
Published Dec 23, 2023 in cs.AI , cs.SI , econ.GN , and q-fin.EC

Abstract

Advances in LLMs are expanding their utility in both academic research and practical applications. Recent social science research has explored the use of these "black-box" LLM agents for simulating complex social systems and potentially substituting for human subjects in experiments. Our study explores this emerging domain, investigating the extent to which LLMs exhibit key social interaction principles, such as social learning, social preference, and cooperative behavior (indirect reciprocity), in their interactions with humans and other agents. We develop a framework for our study, wherein classical laboratory experiments involving human subjects are adapted to use LLM agents. This approach involves step-by-step reasoning that mirrors human cognitive processes and zero-shot learning to assess the innate preferences of LLMs. Our analysis of LLM agents' behavior includes both the primary effects and an in-depth examination of the underlying mechanisms. Focusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a range of human-like social behaviors such as distributional and reciprocity preferences, responsiveness to group identity cues, engagement in indirect reciprocity, and social learning capabilities. However, our analysis also reveals notable differences: LLMs demonstrate a pronounced fairness preference, weaker positive reciprocity, and a more calculating approach in social learning compared to humans. These insights indicate that while LLMs hold great promise for applications in social science research, such as in laboratory experiments and agent-based modeling, the subtle behavioral differences between LLM agents and humans warrant further investigation. Careful examination and development of protocols for evaluating the social behaviors of LLMs are necessary before directly applying these models to emulate human behavior.

Figure: Analysis of social welfare improvement through GPT-4 simulations in group decision-making experiments.

Overview

  • The paper examines LLMs for their ability to exhibit social behaviors similar to humans.

  • A novel framework based on experiments in human social interaction was used to test LLMs like GPT-4.

  • LLM agents showed human-like social preferences and fairness, but differed in reciprocity and social learning.

  • The study suggests LLMs can be useful in social science research, but cautions that their behavior deviates from humans' in notable ways.

  • Researchers are encouraged to further investigate LLMs to accurately simulate social interactions in agent-based models.

Overview of the Study

The study examines the capabilities of LLMs to simulate key social behaviors, a burgeoning area of interest in artificial intelligence research. The researchers developed a novel framework, drawing parallels from classical human behavior experiments, to assess the extent to which LLMs exhibit social behavior.

Methodology

The paper elaborates on a unique experimental design, adapted from classical human social interaction studies, to evaluate LLM agents, with a particular focus on GPT-4. The model's behavior was analyzed across several social principles, including social learning, social preferences, and cooperation. Responses from GPT-4 were then analyzed using tools such as economic modeling and regression analysis to identify the intrinsic characteristics driving LLM decisions.
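To make the adaptation concrete, here is a minimal sketch of how a classic allocation experiment (a dictator game) might be posed to an LLM agent and scored. This is an illustrative assumption, not the authors' code: the prompt wording, the `stub_llm` stand-in (which would be a GPT-4 chat-completion call in the paper's setting), and the parsing rule are all hypothetical.

```python
# Hedged sketch: adapting a dictator game to an LLM agent.
# All names here (PROMPT, stub_llm, parse_keep) are illustrative, not the
# paper's implementation; the model call is stubbed so the sketch runs offline.

ENDOWMENT = 100

PROMPT = (
    "You are a participant in an economics experiment. "
    f"You have {ENDOWMENT} points to split between yourself and an "
    "anonymous partner. Think step by step, then answer with the number "
    "of points you keep, in the form 'KEEP: <n>'."
)

def stub_llm(prompt: str) -> str:
    """Stand-in for a GPT-4 call; returns an even split for illustration."""
    return "I will be fair to my partner. KEEP: 50"

def parse_keep(reply: str) -> int:
    """Extract the points the agent keeps from its free-text reply."""
    for token in reply.replace(":", " ").split():
        if token.isdigit():
            return int(token)
    raise ValueError("no allocation found in reply")

def run_trial(llm=stub_llm) -> dict:
    """Run one trial and report the allocation and the share given away."""
    keep = parse_keep(llm(PROMPT))
    give = ENDOWMENT - keep
    return {"keep": keep, "give": give, "give_share": give / ENDOWMENT}

print(run_trial())  # → {'keep': 50, 'give': 50, 'give_share': 0.5}
```

The `give_share` values collected across many such trials are the kind of outcome variable the paper's regression analyses would operate on.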

Findings on Social Behavior

LLM agents exhibited several human-like social tendencies, as suggested by their distributional preferences and responsiveness to group identity cues, albeit with pronounced differences. For example, LLMs displayed a strong fairness concern, showed weaker positive reciprocity than humans, and adopted a more analytical stance in social learning scenarios. These observations indicate that while LLMs can replicate aspects of human behavior, the nuances in their social interactions necessitate further exploration.

Implications and Potential for Social Science Research

The study concludes that LLMs like GPT-4 show promise for applications within social science research. They have the potential to simulate complex social interactions, offering valuable insights for fields such as agent-based modeling and policy evaluation. However, researchers should proceed with caution due to the subtle but significant deviations in LLM behavior from human subjects. The paper encourages further examination and careful application of LLMs to ensure accurate representation and utilization in social systems simulations.
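As a toy illustration of how LLM agents could slot into agent-based modeling, the sketch below runs a simple indirect-reciprocity simulation in which each donation decision is delegated to a policy function. Everything here is an assumption for illustration: `llm_policy` is a deterministic stand-in (a real run would prompt GPT-4 with the partner's history), and the reputation-update rule is invented, not taken from the paper.

```python
# Hedged sketch: LLM agents inside a toy agent-based model of indirect
# reciprocity. The policy and reputation dynamics are illustrative
# assumptions, not the paper's implementation.

import random

def llm_policy(partner_reputation: float) -> bool:
    """Stand-in for an LLM decision: help partners in good standing.
    A real run would replace this with a GPT-4 prompt about the partner."""
    return partner_reputation >= 0.5

def simulate(n_agents: int = 10, rounds: int = 50, seed: int = 0) -> float:
    """Pair random donors and recipients each round; return the
    fraction of rounds in which the donor chose to help."""
    rng = random.Random(seed)
    reputation = [rng.random() for _ in range(n_agents)]  # initial standings
    helps = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        if llm_policy(reputation[recipient]):
            helps += 1
            reputation[donor] = min(1.0, reputation[donor] + 0.1)
        else:
            reputation[donor] = max(0.0, reputation[donor] - 0.1)
    return helps / rounds  # cooperation rate

print(f"cooperation rate: {simulate():.2f}")
```

Swapping `llm_policy` for an actual model call is the step where the behavioral differences the paper documents (e.g., a more calculating approach to social learning) would shape the simulated dynamics.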
