
Abstract

The variations between in-group and out-group speech (intergroup bias) are subtle and could underlie many social phenomena like stereotype perpetuation and implicit bias. In this paper, we model the intergroup bias as a tagging task on English sports comments from forums dedicated to fandom for NFL teams. We curate a unique dataset of over 6 million game-time comments from opposing perspectives (the teams in the game), each comment grounded in a non-linguistic description of the events that precipitated these comments (live win probabilities for each team). Expert and crowd annotations justify modeling the bias through tagging of implicit and explicit referring expressions and reveal the rich, contextual understanding of language and the world required for this task. For large-scale analysis of intergroup variation, we use LLMs for automated tagging, and discover that some LLMs perform best when prompted with linguistic descriptions of the win probability at the time of the comment, rather than numerical probability. Further, large-scale tagging of comments using LLMs uncovers linear variations in the form of referent across win probabilities that distinguish in-group and out-group utterances. Code and data are available at https://github.com/venkatasg/intergroup-nfl.

Figure: Parallel language corpus of NFL subreddit comments, tagged with relevant entities and grounded in live win probabilities.

Overview

  • The paper investigates subtle variations between in-group and out-group speech in English sports comments from NFL team forums, exploring how these variations contribute to social phenomena like stereotype perpetuation and implicit bias.

  • Utilizing a dataset of over 6 million game-time comments, the study evaluates the ability of LLMs to tag comments for implicit and explicit referring expressions, finding that GPT-4o and a finetuned Llama-3 outperform the human baseline on this task.

  • Key findings include a decrease in in-group references as win probability rises, while out-group references remain rare and stable across all win probabilities; these results point to subtle, state-dependent variation in language use and to potential applications in mitigating bias in other forms of communication.

Interpreting Referring Expressions in Intergroup Bias

The paper "Do they mean 'us'? Interpreting Referring Expressions in Intergroup Bias" by Govindarajan et al., addresses the subtle variations between in-group and out-group speech that could underlie several social phenomena such as stereotype perpetuation and implicit bias. Through a meticulous analysis of English sports comments from NFL team forums, this paper provides an in-depth exploration and modeling of intergroup bias as a tagging task. This essay summarizes the paper's methodology, results, and potential implications for future research in AI and NLP.

Methodology

The authors constructed a unique dataset of over 6 million game-time comments from forums dedicated to NFL teams. This dataset is grounded in live win probabilities, providing a non-linguistic description of the game events that triggered these comments. The task of tagging comments for implicit and explicit referring expressions required annotators and LLMs to distinguish between in-group, out-group, and other references.
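To make the setup concrete, the following sketch shows one plausible shape for a grounded, tagged record. The field names and label strings (IN, OUT, OTHER) are illustrative assumptions for exposition, not the dataset's actual schema; see the paper's repository for the real format.

```python
# Minimal sketch of one grounded, tagged comment record (hypothetical schema).
from dataclasses import dataclass

@dataclass
class TaggedComment:
    comment: str                  # raw game-time comment from a team subreddit
    team: str                     # the commenter's (in-group) team
    opponent: str                 # the out-group team in this game
    win_prob: float               # live in-group win probability at posting time
    tags: list[tuple[str, str]]   # (referring expression, label) pairs

example = TaggedComment(
    comment="We can't stop their run game at all",
    team="Cowboys",
    opponent="Eagles",
    win_prob=0.32,
    tags=[("We", "IN"), ("their", "OUT")],
)
```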

Expert and crowd annotations guided the initial phase, demonstrating the rich contextual understanding of language required for the task. The authors then explored whether LLMs can be used for large-scale tagging, discovering that some models perform best when prompted with linguistic descriptions of win probabilities rather than raw numbers. This motivated a systematic evaluation of LLMs such as GPT-4o and Llama-3 on automated tagging under various conditions.
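A minimal sketch of this prompting strategy follows, assuming a hand-rolled mapping from numeric win probability to a verbal description. The bucket boundaries, phrasings, and prompt wording are assumptions for illustration, not the authors' exact templates.

```python
# Hedged sketch: convert a numeric win probability to a linguistic
# description before prompting, which the paper reports helps some LLMs.
def describe_win_prob(p: float, team: str) -> str:
    # Bucket boundaries and phrasings are illustrative assumptions.
    if p >= 0.95:
        return f"The {team} are almost certain to win."
    if p >= 0.70:
        return f"The {team} are likely to win."
    if p >= 0.30:
        return "The game is close."
    if p >= 0.05:
        return f"The {team} are likely to lose."
    return f"The {team} are almost certain to lose."

def build_prompt(comment: str, team: str, opponent: str, p: float) -> str:
    context = describe_win_prob(p, team)
    return (
        f"You are analyzing a comment from a {team} fan during a game "
        f"against the {opponent}. {context}\n"
        "Tag each referring expression in the comment as IN (the commenter's "
        "team), OUT (the opposing team), or OTHER.\n"
        f"Comment: {comment}"
    )

print(build_prompt("We can't stop their run game at all", "Cowboys", "Eagles", 0.32))
```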

Results

Both GPT-4o and a finetuned Llama-3 exceeded the human baseline on the tagging task. Llama-3 edged out GPT-4o in overall performance, while GPT-4o excelled at identifying out-group and other references, plausibly due to its extensive parametric knowledge. Notably, providing the win probability in linguistic form improved GPT-4o's identification of in-group references in few-shot settings.
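One plausible way to score such tagging against human annotations is micro-F1 over (expression, label) pairs; the sketch below assumes that formulation, though the paper's exact metric may differ.

```python
# Illustrative evaluation sketch: micro-F1 over sets of (expression, label)
# pairs, comparing model tags against gold annotations per comment.
def micro_f1(gold: list[set], pred: list[set]) -> float:
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # pairs the model got exactly right
        fp += len(p - g)   # predicted pairs not in the gold annotation
        fn += len(g - p)   # gold pairs the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = [{("We", "IN"), ("their", "OUT")}]
pred = [{("We", "IN"), ("their", "OTHER")}]
print(f"micro-F1 = {micro_f1(gold, pred):.2f}")  # 0.50: one pair right, one mislabeled
```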

Two primary linguistic behaviors were uncovered during large-scale analysis:

  1. Decrease in In-Group References with Higher Win Probability: Commenters were more likely to abstract away from referring to the in-group as the win probability increased.
  2. Stability in Out-Group References: References to the out-group remained rarer and stable across all win probabilities.

These findings sharpen our understanding of the Linguistic Intergroup Bias (LIB) hypothesis, suggesting that the form of reference varies subtly and systematically with win probability.
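A rough sketch of how such a trend could be measured at scale: bin comments by win probability, compute the per-bin rate of comments containing a given reference type, and fit a least-squares slope. The helper names and synthetic data below are illustrative assumptions, not the paper's exact aggregation.

```python
# Sketch of the large-scale analysis step, using only the standard library.
import statistics

def reference_rate_by_bin(comments, label, n_bins=10):
    """comments: iterable of (win_prob, tags), tags as [(expression, label), ...]."""
    bins = [[] for _ in range(n_bins)]
    for win_prob, tags in comments:
        idx = min(int(win_prob * n_bins), n_bins - 1)
        bins[idx].append(any(lab == label for _, lab in tags))
    return [statistics.mean(b) if b else 0.0 for b in bins]

def slope(ys):
    """Least-squares slope of per-bin rates against bin index."""
    xs = list(range(len(ys)))
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Synthetic data for illustration only.
comments = [
    (0.15, [("We", "IN")]),
    (0.45, [("We", "IN"), ("their", "OUT")]),
    (0.85, [("the defense", "OTHER")]),
]
rates = reference_rate_by_bin(comments, "IN")
print(slope(rates))  # a negative slope would mean fewer in-group references at high win probability
```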

Implications

The research elucidates nuanced forms of social bias in language that go beyond explicit derogatory terms or behaviors. Understanding how language systematically changes when referring to in-group versus out-group members, contingent upon the state of the world, opens up new avenues for analyzing and mitigating bias in communication.

From a practical perspective, the automated tagging framework can be adapted to other domains where intergroup bias is prevalent, such as political commentary, corporate communications, and media reporting. The methodology used in this study can be instrumental in training AI models to recognize and address social biases embedded in everyday language.

Looking ahead, there is scope for improving how LLMs incorporate numerical data such as win probabilities into language understanding and generation. Larger-scale models, or mixtures of experts that combine stronger numerical reasoning with strong linguistic performance, could yield even more accurate models for identifying intergroup bias.

Moreover, parallel datasets from various sports or domains can validate the linear relationships observed and help generalize the findings across different contexts. The exploration of how emotional valence interacts with intergroup bias in language remains an intriguing avenue for further research.

Conclusion

Govindarajan et al.'s work represents a systematic approach to identifying and modeling the subtleties of intergroup bias in language. The paper's detailed methodology, combined with robust LLM performance analysis, provides significant insights into the linguistic behaviors associated with social biases. These insights have substantial implications for the development of AI models aimed at mitigating bias, offering a framework that can be extended to various real-world applications. The findings advocate for a nuanced understanding of social biases and their manifestation in language, highlighting the continued relevance and importance of interdisciplinary research at the intersection of linguistics, social science, and artificial intelligence.
