Thousands of AI Authors on the Future of AI (2401.02843v2)
Abstract: In the largest survey of its kind, 2,778 researchers who had published in top-tier AI venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: while 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave a 5% or greater chance to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.
- AI Impacts. 2022 Expert Survey on Progress in AI, Aug 2022. URL https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2022_expert_survey_on_progress_in_ai.
- OpenAI. Moving AI governance forward, Jul 2023. URL https://openai.com/blog/moving-ai-governance-forward.
- Center for Human-Compatible Artificial Intelligence. Research Publications – Center for Human-Compatible Artificial Intelligence, 2023. URL https://humancompatible.ai/research.
- Gavin Newsom. EXECUTIVE ORDER N-12-23, Sep 2023. URL https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf.
- AI.gov. Making AI Work for the American People, 2023. URL https://ai.gov.
- Inter-Agency Working Group on Artificial Intelligence. Principles for the Ethical Use of Artificial Intelligence in the United Nations System (Advanced unedited version), September 2022.
- Views of prominent AI developers on risk from AI, 2023. URL https://wiki.aiimpacts.org/arguments_for_ai_risk/views_of_ai_developers_on_risk_from_ai.
- Pablo Villalobos. Scaling laws literature review, 2023. URL https://epochai.org/blog/scaling-laws-literature-review. Accessed: 2023-12-22.
- Discontinuous Progress Investigation. Technical report, AI Impacts, 2021. URL https://wiki.aiimpacts.org/ai_timelines/discontinuous_progress_investigation.
- Stephen M. Omohundro. The basic AI drives. In Proceedings of the 2008 Conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pages 483–492, NLD, 2008. IOS Press. ISBN 9781586038335.
- AI deception: A survey of examples, risks, and potential solutions. arXiv preprint arXiv:2308.14752, 2023.
- Charles I Jones. The AI dilemma: Growth versus existential risk. Technical report, National Bureau of Economic Research, 2023.
- Economic growth under transformative AI. Technical report, National Bureau of Economic Research, 2023.
- Future of Life Institute. Pause Giant AI Experiments: An Open Letter, 2023. URL https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
- Center for AI Safety. Statement on AI Risk: AI experts and public figures express their concerns about AI risk., 2023. URL https://www.safe.ai/statement-on-ai-risk.
- Joseph R Biden. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 2023. URL https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
- Dario Amodei. Written testimony of Dario Amodei, Ph.D., Co-Founder and CEO, Anthropic; for a hearing on “Oversight of A.I.: Principles for Regulation”; before the Judiciary Committee Subcommittee on Privacy, Technology, and the Law; United States Senate, Jul 2023. URL https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_testimony_-_amodei.pdf.
- gov.uk. AI Safety Summit 2023 - GOV.UK, November 2023. URL https://www.gov.uk/government/topical-events/ai-safety-summit-2023.
- European Parliament. EU AI Act: first regulation on artificial intelligence, 2023. Accessed June 25, 2023. URL https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
- When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62:729–754, 2018a.
- Zachary Stein-Perlman. Surveys of US public opinion on AI, 2023. URL https://wiki.aiimpacts.org/responses_to_ai/public_opinion_on_ai/surveys_of_public_opinion_on_ai/surveys_of_us_public_opinion_on_ai.
- The state of AI in 2023: Generative AI’s breakout year. Technical report, McKinsey & Company, 2023.
- AI Impacts. 2023 Expert Survey on Progress in AI [Survey PDF], 2023a. URL https://wiki.aiimpacts.org/_media/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_espai_paid.pdf.
- Amos Tversky and Daniel Kahneman. The framing of decisions and the psychology of choice. Science, 211(4481):453–458, 1981.
- AI Impacts. 2023 Expert Survey on Progress in AI [AI Impacts Wiki], 2023b. URL https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai.
- O*NET. All Job Family Occupations, 2023. URL https://www.onetonline.org/find/family?f=0.
- Joseph Carlsmith. Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353, 2022.
- Stuart Russell. Of myths and moonshine, 2014. URL https://www.edge.org/conversation/the-myth-of-ai#26015.
- Philip E. Tetlock. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, Princeton, 2005. ISBN 9781400888818. doi:10.1515/9781400888818. URL https://doi.org/10.1515/9781400888818.
- A strategy to improve expert technology forecasts. Proceedings of the National Academy of Sciences, 118(21):e2021558118, 2021.
- James Surowiecki. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Doubleday & Co., 2004.
- Forecasting existential risks: Evidence from a long-run forecasting tournament, 2023.
- Michael Braun Hamilton. Online survey response rates and times: Background and guidance for industry. Tercent, Inc, 2003.
- National Research Council (US) Committee on Assessing Fundamental Attitudes of Life Scientists as a Basis for Biosecurity Education. A survey of attitudes and actions on dual use research in the life sciences: A collaborative effort of the National Research Council and the American Association for the Advancement of Science. Technical report, American Association for the Advancement of Science and National Research Council, 2009.
- 2023 Expert Survey on Progress in AI, Oct 2023. URL https://osf.io/8gzdr.
- AI Impacts. AI Timeline Surveys, 2023c. URL https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/ai_timeline_surveys.
- Artificial intelligence and economic growth. Technical report, National Bureau of Economic Research, 2017.
- Dennis Bray and Hans von Storch. Climate science: An empirical example of postnormal science. Bulletin of the American Meteorological Society, 80(3):439–456, 1999.
- CliSci2008: A survey of the perspectives of climate scientists concerning climate science and climate change. GKSS-Forschungszentrum Geesthacht, Geesthacht, 2010.
- Examining the scientific consensus on climate change. Eos, Transactions American Geophysical Union, 90(3):22–23, 2009.
- Viewpoint: When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, pages 729–754, 2018b. doi:10.1613/jair.1.11222.
- The causes and consequences of response rates in surveys by the news media and government contractor survey research firms. Advances in Telephone Survey Methodology, pages 499–528, 2007.
- Thomas R Stewart. Scientists’ uncertainty and disagreement about global climate change: A psychological perspective. International Journal of Psychology, 26(5):565–573, 1991.
- Amos Tversky and Daniel Kahneman. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4):293, 1983.
- The future prospects of energy technologies: Insights from expert elicitations. Review of Environmental Economics and Policy, 12(1), 2016.
- Mail surveys for election forecasting? An evaluation of the Columbus Dispatch poll. Public Opinion Quarterly, 60(2):181–227, 1996.
- Expert elicitation survey predicts 37% to 49% declines in wind energy costs by 2050. Nature Energy, 6(5):555–565, 2021.
- Forecasting AI progress: Evidence from a survey of machine learning researchers. arXiv preprint arXiv:2206.04132, 2022.