Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks (2305.13782v1)
Abstract: LLMs have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. Multimodal models that can additionally handle images as input, while actively researched, have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the LLM using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access LLMs against GPT-3 on five vision-language tasks when given textually-encoded visual information. Our results suggest that LLMs are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content.
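The pipeline the abstract describes can be sketched in a few lines: a separate verbalisation model turns the image into text, and a language-only model then answers the task prompt from that text alone. The sketch below uses placeholder functions for both model calls (they are hypothetical stand-ins, not the paper's actual verbalisation models or LLMs), so it only illustrates the data flow.

```python
# Sketch of the "images in language space" pipeline: image -> text
# (verbalisation model) -> answer (language-only model). Both model
# calls are hypothetical stand-ins for illustration only.

def verbalise_image(image_path: str) -> str:
    """Stand-in for a verbalisation model (e.g. an image captioner).
    A real system would run a vision model on the image file."""
    return "A brown dog is catching a red frisbee in a park."

def language_model(prompt: str) -> str:
    """Stand-in for a few-shot LLM call; a real system would send the
    prompt to GPT-3 or an open-source, open-access LLM."""
    # Trivial keyword heuristic so the sketch runs end-to-end.
    return "frisbee" if "catching" in prompt else "unknown"

def answer_vqa(image_path: str, question: str) -> str:
    """Answer a visual question using only textually-encoded visual
    information, mirroring the setup described in the abstract."""
    caption = verbalise_image(image_path)
    prompt = (
        "Image description: " + caption + "\n"
        "Question: " + question + "\n"
        "Answer:"
    )
    return language_model(prompt)

print(answer_vqa("dog.jpg", "What is the dog catching?"))
```

Because the LLM only ever sees the caption, its answer can be traced back to the verbalised image content, which is the interpretability benefit the abstract mentions.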