LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models (2312.02949v1)
Abstract: With the recent significant advancements in large multimodal models (LMMs), the importance of their grounding capability in visual chat is increasingly recognized. Despite recent efforts to enable LMMs to support grounding, their grounding and chat capabilities are usually separate, and their chat performance drops dramatically when asked to ground. The problem is the lack of a dataset for grounded visual chat (GVC); existing grounding datasets contain only short captions. To address this issue, we have created GVC data that combines grounding and chat capabilities. To better evaluate GVC capabilities, we have introduced a benchmark called Grounding-Bench. Additionally, we have proposed a model design that supports GVC and various types of visual prompts by connecting segmentation models with LLMs. Experimental results demonstrate that our model outperforms other LMMs on Grounding-Bench. Furthermore, our model achieves competitive performance on classic grounding benchmarks such as RefCOCO/+/g and Flickr30K Entities. Our code will be released at https://github.com/UX-Decoder/LLaVA-Grounding.
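The model design described in the abstract, connecting a segmentation model with an LLM to support grounded visual chat, can be pictured as a two-stage flow: the LMM writes the chat response and marks which phrases should be grounded, and a segmentation model turns each marked phrase into a mask over the image. Below is a minimal Python sketch of that flow, not the paper's actual implementation; every name here (`GroundedSpan`, `GroundedReply`, `grounded_chat`, `lmm_generate`, `segment`) is a hypothetical placeholder, and the real interfaces live in the linked repository.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical types for illustration only; the actual LLaVA-Grounding
# interfaces are defined in https://github.com/UX-Decoder/LLaVA-Grounding.

@dataclass
class GroundedSpan:
    text: str                    # phrase in the chat response, e.g. "a black dog"
    char_range: Tuple[int, int]  # character offsets of the phrase in the response

@dataclass
class GroundedReply:
    response: str
    spans: List[GroundedSpan]
    masks: List[object]          # one segmentation mask (or box) per grounded span

def grounded_chat(
    image: object,
    question: str,
    lmm_generate: Callable[[object, str], Tuple[str, List[GroundedSpan]]],
    segment: Callable[[object, str], object],
) -> GroundedReply:
    """Sketch of a two-stage grounded-chat flow (assumed, not the paper's code):
    the LMM produces the answer plus the phrases to ground, then a segmentation
    model maps each phrase to a mask over the image."""
    response, spans = lmm_generate(image, question)         # chat + phrase tagging
    masks = [segment(image, span.text) for span in spans]   # phrase -> mask
    return GroundedReply(response=response, spans=spans, masks=masks)
```

In this sketch, `lmm_generate` stands in for the chat model that emits grounded phrase spans alongside its answer, and `segment` stands in for the segmentation model the paper connects to the LLM; both are placeholders you would replace with the released components.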