
Multimodal Analysis Of Google Bard And GPT-Vision: Experiments In Visual Reasoning

(2309.16705)
Published Aug 17, 2023 in cs.CV, cs.CL, and cs.LG

Abstract

Addressing the gap in understanding visual comprehension in LLMs, we designed a challenge-response study, subjecting Google Bard and GPT-Vision to 64 visual tasks spanning categories such as "Visual Situational Reasoning" and "Next Scene Prediction." Previous models, such as GPT-4, leaned heavily on optical character recognition tools like Tesseract, whereas Bard and GPT-Vision, akin to Google Lens and Visual API, employ deep learning techniques for visual text recognition. However, our findings spotlight both vision-language models' limitations: while proficient at solving visual CAPTCHAs that stump ChatGPT alone, they falter at recreating visual elements like ASCII art and at analyzing Tic Tac Toe grids, suggesting an over-reliance on educated visual guesses. Prediction from visual inputs appears particularly challenging: current "next-token" multimodal models offer no common-sense guesses for next-scene forecasting. This study provides experimental insights into the current capacities and areas for improvement in multimodal LLMs.
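
For readers who want to try this kind of challenge-response probe themselves, the sketch below shows one way to pose an image task to a vision-capable chat model. It is illustrative only: the paper does not describe its querying setup, and the OpenAI SDK usage, the "gpt-4o" model name, the issue_visual_challenge helper, and the tictactoe.png file are assumptions, not details from the study.

    import base64
    from openai import OpenAI  # official OpenAI Python SDK (assumed installed)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def issue_visual_challenge(image_path: str, prompt: str,
                               model: str = "gpt-4o") -> str:
        """Send one image-plus-prompt challenge to a vision-capable chat
        model and return its free-text response."""
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("utf-8")

        response = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    # Inline the image as a base64 data URL.
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # A challenge in the spirit of the paper's Tic Tac Toe category
    # (tictactoe.png is a hypothetical local screenshot of a board):
    print(issue_visual_challenge(
        "tictactoe.png",
        "Whose turn is it, and which move should they play next?"))

Scoring such free-text responses against a rubric, one task category at a time, mirrors the challenge-response design the abstract describes.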

