VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models (2404.13874v4)
Abstract: Large Vision-Language Models (LVLMs) suffer from hallucination issues, wherein the models generate plausible-sounding but factually incorrect outputs, undermining their reliability. A comprehensive quantitative evaluation is necessary to identify and understand the extent of hallucinations in these models. However, existing benchmarks are often limited in scope, focusing mainly on object hallucinations. Furthermore, current evaluation methods struggle to capture the subtle semantic differences between model outputs and reference data, as well as the balance between hallucination and informativeness. To address these issues, we introduce a multi-dimensional benchmark covering objects, attributes, and relations, with challenging images selected based on associative biases. Moreover, we propose an LLM-based two-stage evaluation framework that generalizes the popular CHAIR metric and incorporates both faithfulness and coverage into the evaluation. Experiments on 10 established LVLMs show that our metric is more comprehensive and correlates better with human judgments than existing methods on our challenging, human-annotated benchmark. Our work also highlights the critical balance between the faithfulness and coverage of model outputs, and encourages future work to address hallucinations in LVLMs while keeping their outputs informative.
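To make the faithfulness/coverage framing concrete, the sketch below shows how the two quantities can be computed over features (objects, attributes, relations) extracted from a model output and from reference annotations. The function names and the exact set-intersection matching are illustrative assumptions, not the paper's method: the actual framework uses an LLM-based two-stage pipeline that handles semantic matches subtler than exact string overlap.

```python
# Minimal sketch of CHAIR-style faithfulness and coverage scoring.
# Assumption: features have already been extracted and normalized into sets of
# strings; the paper's LLM-based matching is replaced here by set intersection.

def faithfulness(generated: set[str], reference: set[str]) -> float:
    """Fraction of generated features supported by the reference
    (1 minus the hallucination rate, generalizing CHAIR beyond objects)."""
    if not generated:
        return 1.0  # nothing generated, so nothing hallucinated
    return len(generated & reference) / len(generated)

def coverage(generated: set[str], reference: set[str]) -> float:
    """Fraction of reference features that the model output mentions,
    rewarding informative (non-trivially short) outputs."""
    if not reference:
        return 1.0
    return len(generated & reference) / len(reference)

# Hypothetical example: the model mentions a hallucinated "cat" and
# misses the reference feature "grass".
gen = {"dog", "frisbee", "cat"}
ref = {"dog", "frisbee", "grass"}
print(f"faithfulness = {faithfulness(gen, ref):.2f}")  # 0.67 (cat is unsupported)
print(f"coverage     = {coverage(gen, ref):.2f}")      # 0.67 (grass is not covered)
```

The two scores pull in opposite directions: a terse caption can be perfectly faithful yet have low coverage, while an exhaustive one risks hallucination, which is the balance the abstract emphasizes.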