Emergent Mind

Thousands of AI Authors on the Future of AI

(2401.02843)
Published Jan 5, 2024 in cs.CY, cs.AI, and cs.LG

Abstract

In the largest survey of its kind, 2,778 researchers who had published in top-tier AI venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.

70% of respondents favor giving AI safety research higher priority; opinions are largely unchanged since the 2022 survey.

Overview

  • The paper discusses results from a significant survey of 2,778 AI researchers on the future of AI and its social impact.

  • The 2023 Expert Survey on Progress in AI (ESPAI) included participants from six major AI conferences.

  • Forecasted milestones include AI autonomously building a payment site and composing songs by 2028, with AI potentially surpassing human ability in every task by 2047.

  • While there is optimism about AI's potential, researchers also recognize substantial risks, including misinformation and authoritarian misuse.

  • The survey highlights the need for prioritizing research into AI safety, ethics, and governance.

Introduction

The trajectory of AI is a subject of global significance, shaping decisions in the public sector, private industry, and academia. The future of AI is hotly debated, and there is no consensus among experts. Against this backdrop, a large survey was conducted to elicit AI researchers' predictions about the pace of AI progress and its potential social consequences. The survey encompassed 2,778 AI researchers who had published at leading conferences and is part of a series of inquiries into experts' expectations about AI development.

Survey Scope and Methodology

The 2023 Expert Survey on Progress in AI (ESPAI) drew respondents from an expanded set of six top AI conferences, a significant increase in participation over the previous year's survey. The questionnaire solicited responses via multiple-choice questions, probability estimates, and projected years for future milestones, probing the nature of future AI systems and the potential risks they may pose. To control for framing effects, many questions were posed in subtly different framings that were randomly assigned to participants.
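
For intuition about how such elicited forecasts can be combined, the sketch below (Python, using numpy) aggregates hypothetical respondents' probability-by-year answers into a single cumulative curve and reads off the first year at which the aggregate probability reaches 50%. The respondent numbers, the 10/20/50-year horizons, and the piecewise-linear interpolation are all illustrative assumptions; the paper's own data and statistical aggregation procedure are not reproduced here.

    # Illustrative sketch only: hypothetical responses and a simplified
    # aggregation (piecewise-linear CDFs averaged across respondents),
    # not the paper's actual data or fitting procedure.
    import numpy as np

    survey_year = 2023
    horizons = np.array([10, 20, 50])      # years from the survey date
    responses = np.array([                 # P(milestone achieved by each horizon)
        [0.10, 0.50, 0.90],                # hypothetical respondent A
        [0.05, 0.30, 0.80],                # hypothetical respondent B
        [0.20, 0.60, 0.95],                # hypothetical respondent C
    ])

    years_out = np.arange(1, 101)          # evaluate 1..100 years ahead

    # Interpolate each respondent's answers into a cumulative probability curve.
    cdfs = np.array([
        np.interp(years_out, horizons, p, left=0.0, right=p[-1])
        for p in responses
    ])

    mean_cdf = cdfs.mean(axis=0)           # aggregate curve across respondents

    # First year at which the aggregate probability reaches 50%.
    idx = int(np.argmax(mean_cdf >= 0.5))
    print(f"Aggregate 50% year: {survey_year + years_out[idx]}")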

Results on AI Progress

According to the aggregated forecasts, there is at least a 50% chance that by 2028 AI systems could autonomously build a payment processing site from scratch, compose a song indistinguishable from a new song by a popular musician, and independently download and fine-tune a large language model. Respondents gave a 50% chance of AI outperforming humans in every possible task by 2047, 13 years earlier than the corresponding estimate in the prior year's survey. These predictions reflect both heightened expectations for AI's capabilities and a markedly shorter anticipated timeline for reaching significant milestones.

Social Impacts and Concerns

When it comes to the social consequences of AI, the surveyed researchers shared a mix of optimism and caution. While the majority considered positive outcomes more likely than negative ones, a notable share also assigned meaningful probability to extremely bad scenarios, including the possibility of human extinction. More than half of respondents said that "substantial" or "extreme" concern is warranted about six AI-related scenarios, such as the spread of misinformation and authoritarian control. Respondents disagreed about whether faster or slower AI progress would be better for humanity, but broadly agreed that research aimed at minimizing potential risks from AI systems should receive higher priority.

This survey represents one of the most comprehensive inquiries into the expectations of AI researchers. It not only sheds light on anticipated advances in AI capabilities but also underscores the urgency of addressing the ethical, safety, and governance challenges posed by these rapidly developing technologies.
