Revisiting the Reliability of Psychological Scales on Large Language Models (2305.19926v5)
Abstract: Recent research has examined the characteristics of large language models (LLMs) from a psychological standpoint, acknowledging the necessity of understanding their behavioral characteristics. The administration of personality tests to LLMs has emerged as a noteworthy area in this context. However, whether psychological scales originally devised for humans are suitable for LLMs remains a matter of ongoing debate. Our study aims to determine the reliability of applying personality assessments to LLMs, specifically investigating whether LLMs demonstrate consistent personality traits. Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in their responses to the Big Five Inventory, indicating a satisfactory level of reliability. Furthermore, our research explores the potential of GPT-3.5 to emulate diverse personalities and represent various groups, a capability increasingly sought after in the social sciences, where LLMs are proposed as substitutes for human participants to reduce costs. Our findings reveal that LLMs can represent different personalities when given specific prompt instructions.
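The reliability claim rests on treating repeated, independently sampled runs of a model as respondents and checking the internal consistency of their Big Five Inventory answers. Below is a minimal sketch of that style of analysis, not the paper's actual pipeline: the `ask_model()` wrapper and the toy response matrix are hypothetical stand-ins, and Cronbach's alpha is used here as one standard internal-consistency measure.

```python
# Minimal sketch: internal-consistency reliability of LLM answers to a
# Likert-scale personality inventory. Hypothetical setup, not the paper's code.
import numpy as np

def ask_model(item: str) -> int:
    """Hypothetical wrapper: send one inventory item to the LLM and parse
    its 1-5 Likert answer. Replace with a real API call in practice."""
    raise NotImplementedError

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (runs x items) matrix of Likert responses,
    treating independent model runs as respondents."""
    k = scores.shape[1]                          # number of items on the scale
    item_var = scores.var(axis=0, ddof=1)        # per-item variance across runs
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of per-run total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Toy example: 5 independent runs answering 3 BFI-style items on a 1-5 scale.
responses = np.array([
    [4, 5, 4],
    [4, 4, 4],
    [5, 5, 4],
    [4, 5, 5],
    [4, 4, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Under this framing, a high alpha across runs would indicate that the model answers the scale's items coherently rather than at random; the same matrix could be recomputed under a persona prompt (e.g., prefixing each item with "You are a very extraverted person.") to probe whether instructed personalities shift the scores.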