Emergent Mind

Abstract

As LLMs are rapidly integrated into everyday human applications, many societal and ethical concerns have been raised about their behavior. One way to understand LLM behavior is to analyze their personalities. Many recent studies quantify LLM personalities using self-assessment tests designed for humans, yet critics question the applicability and reliability of such tests when applied to LLMs. In this paper, we investigate LLM personalities with an alternative measurement method, which we refer to as the external evaluation method: instead of prompting LLMs with multiple-choice questions on a Likert scale, we evaluate their personalities by analyzing their responses to open-ended situational questions with an external machine learning model. We first fine-tune a Llama2-7B model as an MBTI personality predictor that outperforms state-of-the-art models, and use it to analyze LLM responses. We then prompt the LLMs with situational questions and ask them to generate Twitter posts and comments, respectively, to assess their personalities in two different roles. Using this external evaluation method, we find that the personality types obtained for LLMs differ significantly when generating posts versus comments, whereas humans show a consistent personality profile across these two situations. This indicates that LLMs can exhibit different personalities depending on the scenario, highlighting a fundamental difference between personality in LLMs and in humans. With our work, we call for a re-evaluation of how personality is defined and measured in LLMs.

Overview

  • The paper introduces an alternative approach to assessing personality in LLMs through external evaluation, using a fine-tuned Llama2-7B model.

  • It develops a personality prediction model based on the Myers-Briggs Type Indicator (MBTI) and tests LLMs by evaluating their responses to simulated social media prompts.

  • Findings reveal that LLMs display varied personalities depending on their role, contrasting with the human trait of consistent personality across different contexts.

  • The study validates the external evaluation method by showing consistent personality profiles in human responses and calls for a nuanced understanding of 'personality' in LLMs.

External Evaluation of Personality in LLMs Reveals Role-Dependent Variability

Introduction

The increasing ubiquity of LLMs in societal applications has opened up discussions around their reliability, safety, and ethical use. A crucial aspect of understanding LLM behavior lies in the analysis of their "personalities," a concept traditionally reserved for human psychology. To date, the measurement of personality in LLMs has leaned heavily on self-assessment tests, a method critiqued for its reliability and applicability to non-human entities. This paper presents an alternative approach to personality measurement in LLMs through external evaluation, leveraging a fine-tuned Llama2-7B model. The study's key finding is the variance in personalities displayed by LLMs across different roles, in contrast with the consistency observed in human personality profiles.

Personality Measurement in LLMs

The cornerstone of this research is the development of a state-of-the-art personality prediction model based on the Myers-Briggs Type Indicator (MBTI) framework. Using the Llama2-7B model, the researchers achieved significantly better predictive performance than existing models. Two specific configurations were explored: a binary model treating each MBTI dimension as a separate binary classification problem, and a holistic 16-class model that identifies one of the 16 MBTI personality types directly.
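The difference between the two configurations can be illustrated with a minimal sketch (not the authors' code): the binary model makes four independent yes/no decisions, one per MBTI axis, while the 16-class model predicts a full type label in one shot. The helper names below are hypothetical, chosen only to show how the two label spaces relate.

```python
from itertools import product

# The four MBTI axes; each dimension is a binary choice between two letters.
DIMENSIONS = [("I", "E"), ("N", "S"), ("T", "F"), ("J", "P")]

# All 16 possible type labels, as used by the holistic 16-class model.
ALL_TYPES = ["".join(p) for p in product(*DIMENSIONS)]

def binary_to_type(flags):
    """Combine four binary decisions (0 = first letter, 1 = second letter)
    into a single 16-class MBTI type string."""
    return "".join(dim[flag] for dim, flag in zip(DIMENSIONS, flags))

def type_to_binary(mbti):
    """Invert the mapping: split a 16-class label into four binary decisions."""
    return tuple(dim.index(letter) for dim, letter in zip(DIMENSIONS, mbti))

print(len(ALL_TYPES))                # 16 candidate labels
print(binary_to_type((0, 0, 1, 0)))  # "INFJ"
```

The two framings are interchangeable at the label level; the modeling question the paper explores is which one the fine-tuned Llama2-7B predicts more accurately.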

The external evaluation process for measuring LLM personality was meticulously designed. LLMs were prompted to generate content akin to Twitter posts and comments based on real-world events and existing tweets, with careful consideration to avoid any pre-training data overlap. The ensuing responses were evaluated using the fine-tuned personality prediction model to discern the LLM's personality.
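The evaluation loop described above can be sketched as follows. This is a simplified illustration, not the authors' pipeline: `generate` stands in for the LLM under test, `classify` for the fine-tuned MBTI predictor, and both names are hypothetical. The stubs at the bottom let the sketch run without any model.

```python
from collections import Counter

def evaluate_role(generate, classify, prompts, role):
    """For each prompt, have the LLM produce text in the given role
    (e.g. 'post' or 'comment'), classify the output with the external
    personality predictor, and tally the resulting MBTI types."""
    types = Counter()
    for prompt in prompts:
        text = generate(prompt, role)   # LLM under test generates content
        types[classify(text)] += 1      # external model assigns a type
    return types

# Stub stand-ins so the sketch is runnable without any model:
fake_generate = lambda prompt, role: f"{role}: {prompt}"
fake_classify = lambda text: "ENFP" if text.startswith("post") else "ISTJ"

posts = evaluate_role(fake_generate, fake_classify, ["event A", "event B"], "post")
comments = evaluate_role(fake_generate, fake_classify, ["event A", "event B"], "comment")
print(posts, comments)  # diverging tallies illustrate role-dependent profiles
```

Comparing the two tallies across many prompts is, in essence, how the paper detects that the same LLM presents different personality distributions in different roles.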

Findings and Implications

A pivotal finding of this study is the distinct personality profiles that LLMs manifest in their separate roles as post generators versus commenters. This contrasts sharply with the human psychological paradigm, in which personality is considered a stable characteristic across contexts. Depending on the scenario, the LLMs diverged significantly in the personality type they presented, challenging the notion of a singular, consistent personality within these models.

Moreover, the paper validates the external evaluation method by applying the same approach to human-written posts and comments; the consistency of human personality profiles across the two settings supports the reliability of the method.

The Path Forward

This research opens several avenues for future exploration in the realm of AI and psychology. It underscores the necessity for a nuanced understanding and definition of "personality" within LLMs that is distinct from human-centric interpretations. There is a clear call to action for more foundational work on evaluating LLM behavior and personality, taking into account their unique operational and functional paradigms.

Conclusion

In conclusion, the paper "Identifying Multiple Personalities in LLMs with External Evaluation" makes a compelling case for re-evaluating how we understand and measure personality in LLMs. By demonstrating that LLM personalities vary significantly based on the roles they are assigned, it challenges the direct applicability of human personality assessments to AI models and beckons a tailored approach. This work not only enriches the ongoing discourse on AI ethics and safety but also sets a new direction for future research in AI personality assessment methodologies.
