SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation (arXiv:2405.08807v2)
Abstract: Large multimodal models (LMMs) have proven flexible and generalisable across many tasks and fields. Although they have strong potential to aid scientific research, their capabilities in this domain are not well characterised. A key aspect of scientific research is the ability to understand and interpret figures, which serve as a rich, compressed source of complex information. In this work, we present SciFIBench, a scientific figure interpretation benchmark consisting of 2000 questions split between two tasks across 8 categories. The questions are curated from arXiv paper figures and captions, using adversarial filtering to find hard negatives and human verification for quality control. We evaluate 28 LMMs on SciFIBench, finding it to be a challenging benchmark. Finally, we investigate the alignment and reasoning faithfulness of the LMMs on augmented question sets from our benchmark. We release SciFIBench to encourage progress in this domain.
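
The technical core of the curation pipeline described above is adversarial filtering: each multiple-choice question pairs the correct figure or caption with distractors that are deceptively similar, so the question cannot be solved by surface-level matching. Below is a minimal sketch of that idea, assuming a CLIP-style shared embedding space and a precomputed caption pool; the embedding dimensions, pool size, and five-option format are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (not the authors' exact pipeline) of adversarial
# filtering for hard negatives: given figure/caption embeddings in a
# shared space (e.g. CLIP-style), pick the k most similar *wrong*
# captions to serve as distractors in a multiple-choice question.
import numpy as np

def mine_hard_negatives(fig_emb: np.ndarray,
                        caption_embs: np.ndarray,
                        true_idx: int,
                        k: int = 4) -> list[int]:
    """Return indices of the k captions most similar to the figure,
    excluding the ground-truth caption (these become the distractors)."""
    # Cosine similarity between the figure and every candidate caption.
    fig = fig_emb / np.linalg.norm(fig_emb)
    caps = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    sims = caps @ fig
    sims[true_idx] = -np.inf          # never select the correct answer
    return np.argsort(-sims)[:k].tolist()

# Usage: assemble one 5-way question (1 true caption + 4 hard negatives).
rng = np.random.default_rng(0)
fig_emb = rng.normal(size=512)                 # placeholder figure embedding
caption_embs = rng.normal(size=(2000, 512))    # placeholder caption pool
negatives = mine_hard_negatives(fig_emb, caption_embs, true_idx=7)
options = negatives + [7]                      # shuffle before presenting
```

Choosing negatives by embedding similarity rather than at random is what makes the benchmark hard; the human-verification step mentioned in the abstract then serves as quality control, removing questions where a mined negative turns out to be an acceptable answer.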