Human Learning about AI Performance

(arXiv:2406.05408)

Published Jun 8, 2024 in econ.GN and q-fin.EC

Abstract

How do humans assess the performance of AI across different tasks? AI has been noted for its surprising ability to accomplish very complex tasks while failing seemingly trivial ones. We show that humans engage in "performance anthropomorphism" when assessing AI capabilities: they project onto AI the ability model that they use to assess humans. In this model, observing an agent fail an easy task is highly diagnostic of low ability, making it unlikely to succeed at any harder task. Conversely, success on a hard task makes success on any easier task likely. We experimentally show that humans project this model onto AI. Both prior beliefs and belief updating about AI performance on standardized math questions appear consistent with the human ability model. This contrasts with actual AI performance, which is uncorrelated with human difficulty in our context, making such beliefs misspecified. Embedding our framework into an adoption model, we show that patterns of under- and over-adoption can be sustained in equilibrium with anthropomorphic beliefs.
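The intuition behind the ability model can be illustrated with a minimal sketch. Assuming a strict threshold formulation (an agent with ability a succeeds on a task of difficulty d iff a ≥ d), which is our own simplification rather than the paper's actual specification, a single failure on an easy task caps the inferred ability, while a single success on a hard task raises its floor. All names and parameters below are hypothetical.

```python
import numpy as np

# Hypothetical threshold ability model (illustration only, not the paper's
# specification): an agent with ability `a` succeeds on a task of difficulty
# `d` iff a >= d. Beliefs about `a` are a discrete prior updated by Bayes' rule.

abilities = np.linspace(0, 1, 101)                   # grid of possible ability levels
prior = np.full_like(abilities, 1 / len(abilities))  # uniform prior belief

def update(belief, difficulty, success):
    """Bayesian update after observing a success or failure at a given difficulty."""
    likelihood = (abilities >= difficulty) if success else (abilities < difficulty)
    posterior = belief * likelihood
    return posterior / posterior.sum()

def prob_success(belief, difficulty):
    """Predicted probability of success on a task of the given difficulty."""
    return belief[abilities >= difficulty].sum()

# Failing an easy task (d = 0.2) is highly diagnostic of low ability ...
belief = update(prior, difficulty=0.2, success=False)
print(prob_success(belief, 0.6))   # 0.0: a harder task now looks impossible

# ... while succeeding on a hard task (d = 0.8) makes easier tasks look easy.
belief = update(prior, difficulty=0.8, success=True)
print(prob_success(belief, 0.3))   # 1.0: any easier task now looks certain
```

The contrast drawn in the abstract is that actual AI success, being uncorrelated with human-perceived difficulty in the authors' context, would not follow this monotone pattern, so beliefs formed this way are misspecified.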
