We introduce a new type of test, called a Turing Experiment (TE), for evaluating the extent to which a given language model, such as a GPT model, can simulate different aspects of human behavior. A TE can also reveal consistent distortions in a language model's simulation of a specific human behavior. Unlike the Turing Test, which involves simulating a single arbitrary individual, a TE requires simulating a representative sample of participants in human subject research. We carry out TEs that attempt to replicate well-established findings from prior studies. We design a methodology for simulating TEs and illustrate its use to compare how well different language models can reproduce classic economic, psycholinguistic, and social psychology experiments: the Ultimatum Game, Garden Path Sentences, the Milgram Shock Experiment, and the Wisdom of Crowds. In the first three TEs, the existing findings were replicated using recent models, while the last TE reveals a "hyper-accuracy distortion" present in some language models (including ChatGPT and GPT-4), which could affect downstream applications in education and the arts.
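The core idea of simulating a representative sample of participants can be sketched as a prompt loop over many distinct simulated individuals. The sketch below is a hypothetical illustration, not the authors' code: `build_prompt`, `query_model`, and the Ultimatum Game framing are assumptions, and `query_model` is a stub to be replaced with a real language-model API call.

```python
def build_prompt(name: str, offer: int = 3, total: int = 10) -> str:
    """Frame one Ultimatum Game decision for a named simulated responder."""
    return (
        f"{name} is offered ${offer} out of a ${total} split in the "
        f"ultimatum game. If {name} rejects, both players get nothing.\n"
        f'Does {name} accept the offer? {name} answers "'
    )

def query_model(prompt: str) -> str:
    # Placeholder: swap in a call to an actual language model here.
    return "yes"

def run_turing_experiment(names: list[str]) -> dict[str, str]:
    """Collect one accept/reject response per simulated participant."""
    return {name: query_model(build_prompt(name)) for name in names}

# Varying the participant name across runs is what turns a single
# completion into a simulated sample of human subjects.
responses = run_turing_experiment(["Alice", "Bob", "Carol"])
```

Aggregating `responses` across many names and offer amounts would then give the acceptance-rate curve that can be compared against the human experimental literature.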