Emergent Mind

On the Measure of Intelligence

(1911.01547)
Published Nov 5, 2019 in cs.AI

Abstract

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.

Figure: hierarchical model mapping cognitive abilities onto a spectrum of generalization.

Overview

  • The paper critiques current AI intelligence measures, advocating for a definition that emphasizes adaptability, problem-solving, and learning across diverse tasks.

  • A new framework based on Algorithmic Information Theory is proposed, redefining intelligence as skill-acquisition efficiency and the ability to generalize from limited data.

  • The Abstraction and Reasoning Corpus (ARC) is introduced as a benchmark to test AI systems' general intelligence, focusing on novelty, generalization, and human-like cognition.

  • This revised definition and the ARC benchmark aim to advance AI research towards systems with genuine learning, adaptability, and comprehensive intelligence capabilities.

Measuring Intelligence in AI Systems: A Deep Dive into "On the Measure of Intelligence"

Understanding the Need for a New Definition of Intelligence

The field of AI has long been driven by the goal of replicating or surpassing human intelligence. Achieving this goal, however, requires a definition of intelligence that applies to AI systems as well as to humans. Historically, the AI community has focused primarily on improving task-specific skill, underestimating the importance of broad abilities and general intelligence. The paper critically examines existing definitions of and approaches to measuring intelligence, highlighting the limitations of skill-based evaluations and the need for a measure that captures what intelligence actually is: the ability to efficiently acquire and apply knowledge and skills across diverse sets of challenges.

A Fresh Perspective on Intelligence

Drawing upon Algorithmic Information Theory, the paper introduces a formal framework for redefining intelligence in AI systems. Intelligence is conceptualized as skill-acquisition efficiency across a scope of tasks, emphasizing the ability to generalize from limited data and prior experience. This definition shifts the focus from mere skill proficiency toward a more holistic view of intelligence that includes adaptability, problem-solving, and learning efficiency. The framework spells out the parameters of this definition, including the critical roles of priors (innate knowledge), experience, and generalization difficulty, and its implications for AI research and evaluation.
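The intuition behind this definition can be rendered schematically. The expression below is a deliberate simplification, not the paper's exact formula (which averates skill-acquisition efficiency over a scope of tasks and curricula with explicit per-task weights); it captures only the proportionality being argued for:

```latex
% Simplified schematic of intelligence as skill-acquisition efficiency.
% NOT the paper's full Algorithmic Information Theory expression, which
% weights each task by its generalization difficulty and averages over
% a scope of tasks and training curricula.
\[
\text{Intelligence} \;\propto\;
\frac{\text{generalization difficulty of the skills acquired}}
     {\text{priors} + \text{experience}}
\]
```

Holding achieved skill fixed, a system that needs fewer priors and less experience to reach that skill, on tasks that are harder to generalize to, counts as more intelligent. This is exactly why unlimited priors or unlimited training data can "buy" skill without buying intelligence: they inflate the denominator.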

The Abstraction and Reasoning Corpus (ARC) as a Benchmark

To operationalize this refined understanding of intelligence, the paper introduces the Abstraction and Reasoning Corpus (ARC), a benchmark designed to rigorously test the general intelligence of AI systems. ARC challenges AI systems with abstract tasks that require understanding and applying Core Knowledge principles, mirroring the foundations of human cognition. The benchmark adheres to the criteria outlined for a fair and effective intelligence evaluation: it emphasizes novelty, generalization, and human-like prior knowledge, aiming to enable a direct comparison between human and machine intelligence.
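To make the task format concrete, here is a minimal Python sketch. The grid contents and the `swap_cells` solver are invented for illustration, but the structure (a task with "train" and "test" lists of input-output grid pairs, where each grid is a list of rows of integers 0-9 denoting colors) matches the JSON format published in the public ARC repository:

```python
# Toy illustration of the ARC task format. Each task offers a handful of
# demonstration pairs ("train") from which the transformation must be
# inferred, plus held-out pairs ("test") to apply it to.
example_task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]},
    ],
}

def swap_cells(grid):
    # Hypothetical solver for this toy task: mirror each row horizontally.
    return [list(reversed(row)) for row in grid]

# A candidate solution is checked against the demonstrations...
for pair in example_task["train"]:
    assert swap_cells(pair["input"]) == pair["output"]

# ...and scored on the held-out test inputs.
predictions = [swap_cells(pair["input"]) for pair in example_task["test"]]
```

The key design point is that no task-specific training set exists: a solver sees only a few demonstrations per task and must generalize from them, which is precisely the skill-acquisition efficiency the paper wants to measure.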

Implications and the Road Ahead

The introduction of this new definition of intelligence and the ARC benchmark represents a paradigm shift in AI research and evaluation. It encourages the development of AI systems not just with specialized abilities but with the genuine capacity for learning, adaptability, and problem-solving. The framework and ARC set the stage for future investigations into AI’s potential to achieve a form of intelligence that is both broad and deep, fostering advancements toward truly intelligent systems.

As AI continues to evolve, this paper's insights challenge researchers to think beyond conventional metrics of success and skill acquisition. By redefining intelligence in a way that captures the essence of cognitive flexibility and efficiency, the paper lays the groundwork for the next generation of AI systems. These systems would not only excel at specific tasks but also demonstrate a human-like ability to navigate the complexities of the real world. The journey toward achieving general AI is complex and fraught with challenges, but with comprehensive frameworks like the one proposed, the field is poised to make significant strides.
