
Light Up the Shadows: Enhance Long-Tailed Entity Grounding with Concept-Guided Vision-Language Models (2406.10902v1)

Published 16 Jun 2024 in cs.CV and cs.CL

Abstract: Multi-Modal Knowledge Graphs (MMKGs) have proven valuable for various downstream tasks. However, scaling them up is challenging because building large-scale MMKGs often introduces mismatched images (i.e., noise). Most entities in KGs belong to the long tail, meaning there are few images of them available online. This scarcity makes it difficult to determine whether a found image matches the entity. To address this, we draw on the Triangle of Reference Theory and suggest enhancing vision-language models with concept guidance. Specifically, we introduce COG, a two-stage framework with COncept-Guided vision-language models. The framework comprises a Concept Integration module, which effectively identifies image-text pairs of long-tailed entities, and an Evidence Fusion module, which offers explainability and enables human verification. To demonstrate the effectiveness of COG, we create a dataset of 25k image-text pairs of long-tailed entities. Our comprehensive experiments show that COG not only improves the accuracy of recognizing long-tailed image-text pairs compared to baselines but also offers flexibility and explainability.
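The abstract describes COG as a two-stage pipeline: a Concept Integration module that screens candidate image-text pairs for a long-tailed entity, followed by an Evidence Fusion module that exposes human-checkable evidence for the decision. The snippet below is a minimal, hypothetical sketch of that flow. Only the two module names come from the abstract; the data structure, the string-overlap scoring heuristic, and the acceptance threshold are placeholder assumptions for illustration, not the authors' implementation.

# Hypothetical sketch of the two-stage COG pipeline from the abstract.
# The module names are from the paper; all signatures, scoring logic,
# and thresholds below are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    entity: str            # long-tailed entity name from the KG
    concepts: List[str]    # broader concepts guiding the match (assumed input)
    image_caption: str     # text describing the candidate image


def concept_integration_score(c: Candidate) -> float:
    """Stage 1 (assumed): score how well the image matches the entity,
    using the entity's concepts as extra guidance. A real system would
    query a vision-language model; here we use trivial string overlap."""
    text = c.image_caption.lower()
    entity_hit = 1.0 if c.entity.lower() in text else 0.0
    concept_hits = sum(1.0 for con in c.concepts if con.lower() in text)
    return entity_hit + concept_hits / max(len(c.concepts), 1)


def evidence_fusion(c: Candidate, score: float, threshold: float = 1.0) -> dict:
    """Stage 2 (assumed): fuse the stage-1 score with human-readable
    evidence so the accept/reject decision can be verified by a person."""
    matched = [con for con in c.concepts if con.lower() in c.image_caption.lower()]
    return {
        "entity": c.entity,
        "accepted": score >= threshold,
        "score": round(score, 2),
        "evidence": f"caption mentions concepts {matched}",
    }


if __name__ == "__main__":
    cand = Candidate(
        entity="Aglais io",
        concepts=["butterfly", "insect"],
        image_caption="A butterfly resting on a leaf",
    )
    print(evidence_fusion(cand, concept_integration_score(cand)))

In this toy version, the concept terms let a weakly attested entity ("Aglais io") be accepted because its concept ("butterfly") is grounded in the caption, which mirrors the motivation the abstract gives for concept guidance on long-tailed entities.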

Authors (6)
  1. Yikai Zhang (41 papers)
  2. Qianyu He (26 papers)
  3. Xintao Wang (132 papers)
  4. Siyu Yuan (46 papers)
  5. Jiaqing Liang (62 papers)
  6. Yanghua Xiao (151 papers)
