
Abstract Visual Reasoning Enabled by Language (2303.04091v3)

Published 7 Mar 2023 in cs.AI, cs.CL, and cs.LG

Abstract: While AI models have achieved human or even superhuman performance in many well-defined applications, they still struggle to show signs of broad and flexible intelligence. The Abstraction and Reasoning Corpus (ARC), a visual intelligence benchmark introduced by François Chollet, aims to assess how close AI systems are to human-like cognitive abilities. Most current approaches rely on carefully handcrafted domain-specific program searches to brute-force solutions for the tasks present in ARC. In this work, we propose a general learning-based framework for solving ARC. It is centered on transforming tasks from the vision to the language domain. This composition of language and vision allows for pre-trained models to be leveraged at each stage, enabling a shift from handcrafted priors towards the learned priors of the models. While not yet beating state-of-the-art models on ARC, we demonstrate the potential of our approach, for instance, by solving some ARC tasks that have not been solved previously.


Summary

  • The paper introduces a novel framework that converts ARC visual tasks into text descriptions, leveraging a modular vision and language approach.
  • It employs pre-trained LLMs in a zero-shot setting, demonstrating a log-linear accuracy improvement with model size compared to DSL methods.
  • The research highlights that language-based reasoning can effectively address novel visual challenges, paving the way for more adaptive AI systems.

Abstract Visual Reasoning Enabled by Language

Introduction

Designing AI systems capable of human-like abstract and flexible intelligence remains a formidable challenge. The Abstraction and Reasoning Corpus (ARC), introduced by François Chollet, serves as a benchmark for evaluating such capabilities by requiring AI systems to solve visual tasks from minimal training data. Traditional approaches to ARC have predominantly relied on manually crafted domain-specific languages (DSLs) to search for programmatic solutions. This paper introduces a learning-based framework that moves beyond these manual methods by transforming tasks from the visual to the language domain, thereby leveraging the learned priors of pre-trained models.

Figure 1: Each ARC task consists of pairs of input and output images that describe the task. To solve the task, the missing output image corresponding to the given test input must be predicted.
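The task structure described in the caption can be made concrete with a short sketch. Each ARC task is distributed as a JSON object with "train" demonstration pairs and "test" pairs whose outputs must be predicted; the miniature task below is hypothetical, not one from the actual corpus:

```python
import json

# Hypothetical miniature task in the ARC JSON format: grids are lists of
# lists of integer color codes, 0 conventionally being the background.
task_json = """
{
  "train": [
    {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]]}
  ]
}
"""

task = json.loads(task_json)
train_pairs = task["train"]                            # demonstration pairs
test_inputs = [pair["input"] for pair in task["test"]] # grids to solve

print(len(train_pairs))
print(test_inputs[0])
```

A solver sees only the few train pairs and must infer the transformation well enough to produce the missing test output grid.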

Approach

The research proposes a modular approach consisting of a vision module and a language module. The vision module applies heuristic methods to attribute labels to objects in the task images, converting them into textual descriptions. This stage involves recognizing objects of interest, differentiating them by characteristics such as geometry, symmetry, and persistence, and assigning a pre-defined description to each. Large language models (LLMs) are then employed in a black-box fashion to process these descriptions and predict the missing outputs by generating text-based task solutions.

Figure 2: The original tasks are given as image pairs in the visual domain (red) and are transformed into the text domain (blue) by the encoder. The textual task description allows an LLM to propose a solution, which is then decoded back into the visual domain.

Experiments and Results

Several LLMs, including different versions of Bloom and GPT-3, were tested on these transformed tasks in a zero-shot setting. The outcomes indicate a positive correlation between model size and performance: accuracy increases log-linearly with parameter count. This suggests that LLMs implicitly acquire human-aligned priors and can apply them without explicit task-specific training.

Figure 3: ARC task 995c5fa3.json
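The zero-shot setup can be sketched as follows. The prompt wording here is hypothetical, since the summary does not reproduce the authors' exact template, and the LLM call itself is left out; in practice a model such as Bloom or GPT-3 would be queried in a black-box fashion with text of this shape:

```python
def build_prompt(train_descriptions, test_description):
    """Assemble encoded train pairs plus the encoded test input into a
    single text prompt; the LLM is expected to continue after 'Test output:'."""
    parts = []
    for i, (inp, out) in enumerate(train_descriptions, start=1):
        parts.append(f"Example {i} input:\n{inp}")
        parts.append(f"Example {i} output:\n{out}")
    parts.append(f"Test input:\n{test_description}")
    parts.append("Test output:")
    return "\n\n".join(parts)

prompt = build_prompt(
    [("object of color 1, size 3, at (0, 1)",
      "object of color 1, size 3, at (1, 0)")],
    "object of color 2, size 2, at (0, 0)",
)
print(prompt)
```

The model's completion is then decoded back into a grid by the inverse of the vision module's encoding, closing the loop shown in Figure 2.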

Comparative Analysis

Unlike hand-crafted DSLs, which capture fixed priors accurately but struggle with novel scenarios, the proposed language-driven framework solves some tasks that top-performing competition entries had left unsolved. This supports the notion that pre-trained LLMs can adaptively apply learned priors, expanding problem-solving capability beyond manually encoded counterparts.

Conclusion

This research outlines a transformative approach to solving ARC by marrying visual abstraction with language reasoning. The initial results highlight the framework's potential to advance abstract reasoning in AI by leveraging learned models over rigid programming languages. Future work could further explore enhancing the approach by fine-tuning models or integrating learning-based vision modules, fostering an end-to-end, adaptive problem-solving pipeline. The implications for AI research are profound, suggesting a path forward where LLMs assume more autonomous roles in complex cognitive tasks.
