TinyChart: Efficient Chart Understanding with Visual Token Merging and Program-of-Thoughts Learning (2404.16635v1)
Abstract: Charts are important for presenting and explaining complex data relationships. Recently, multimodal large language models (MLLMs) have shown remarkable capabilities in various chart understanding tasks. However, the sheer size of these models in terms of parameters and computational requirements limits their use in resource-constrained environments. In this paper, we present TinyChart, an efficient MLLM for chart understanding with only 3B parameters. TinyChart overcomes two key challenges in efficient chart understanding: (1) it reduces the burden of learning numerical computations through a Program-of-Thoughts (PoT) learning strategy, which trains the model to generate Python programs for numerical calculations, and (2) it reduces the lengthy vision feature sequences produced by the vision transformer for high-resolution images through a Visual Token Merging module, which gradually merges the most similar vision tokens. Extensive experiments demonstrate that our 3B TinyChart achieves state-of-the-art performance on a variety of chart understanding benchmarks, including ChartQA, Chart-to-Text, Chart-to-Table, OpenCQA, and ChartX. It outperforms several chart understanding MLLMs with up to 13B parameters, such as ChartLlama and ChartAst, as well as the closed-source general-purpose MLLM GPT-4V on ChartQA. It also achieves higher inference throughput thanks to its smaller model scale and more efficient vision encoding. Our code and model are available at https://github.com/X-PLUG/mPLUG-DocOwl/tree/main/TinyChart.
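To illustrate the Program-of-Thoughts idea referenced in the abstract (Chen et al., TMLR 2023, cited below), the sketch that follows shows the kind of output a PoT-trained model produces for a numerical chart question: a short Python program whose execution yields the answer, instead of a direct textual answer. The chart values, the question, and the variable names here are hypothetical illustrations, not data from the paper.

```python
# Hypothetical bar chart: quarterly revenue (in $M) read off the image.
# A PoT-trained model emits a program like this; an external Python
# interpreter executes it to obtain the final numerical answer.
revenue = [12.4, 15.1, 9.8, 18.3]  # values extracted from the chart (hypothetical)

# Question: "How much higher is the best quarter than the quarterly average?"
average = sum(revenue) / len(revenue)
answer = max(revenue) - average
print(round(answer, 2))  # -> 4.4
```

Offloading the arithmetic to an interpreter is what relieves a 3B model from having to learn exact numerical computation in its weights.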
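The Visual Token Merging module builds on the bipartite soft matching of Token Merging (ToMe, Bolya et al., ICLR 2023, cited below). The following is a minimal PyTorch sketch of that matching step under simplifying assumptions; the function name `merge_tokens` is hypothetical, and the collision handling is cruder than the scatter-based reduction a real implementation would use. It is not TinyChart's exact module.

```python
import torch

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Reduce (N, C) tokens to (N - r, C) via bipartite soft matching.

    A simplified ToMe-style sketch, not TinyChart's exact module.
    """
    a, b = x[::2], x[1::2]                     # split tokens into two alternating sets
    a_n = a / a.norm(dim=-1, keepdim=True)     # unit-normalize for cosine similarity
    b_n = b / b.norm(dim=-1, keepdim=True)
    scores = a_n @ b_n.T                       # (|A|, |B|) pairwise similarities

    best_val, best_b = scores.max(dim=-1)      # most similar B partner for each A token
    order = best_val.argsort(descending=True)  # rank A tokens by their best similarity
    merged_a, kept_a = order[:r], order[r:]    # merge the r most similar, keep the rest

    # Average each merged A token into its B partner. If two A tokens pick the
    # same partner, the last write wins in this sketch; a real implementation
    # would use a scatter-based mean instead.
    b = b.clone()
    b[best_b[merged_a]] = (b[best_b[merged_a]] + a[merged_a]) / 2
    return torch.cat([a[kept_a], b], dim=0)
```

For example, `merge_tokens(torch.randn(576, 1024), 64)` shrinks a 576-token sequence to 512 tokens; applying such a step in successive transformer layers "gradually merges" vision tokens as the abstract describes.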
- Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond. arXiv:2308.12966 [cs.CV]
- Token Merging: Your ViT But Faster. In The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=JroZRaRw7Eu
- OneChart: Purify the Chart Structural Extraction via One Auxiliary Token. arXiv:2404.09987 [cs.CV]
- Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Transactions on Machine Learning Research (2023).
- InternLM-XComposer2: Mastering free-form text-image composition and comprehension in vision-language large model. arXiv preprint arXiv:2401.16420 (2024).
- InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD. arXiv preprint arXiv:2404.06512 (2024).
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. CoRR abs/2010.11929 (2020). arXiv:2010.11929 https://arxiv.org/abs/2010.11929
- DocPedia: Unleashing the Power of Large Multimodal Model in the Frequency Domain for Versatile Document Understanding. arXiv preprint arXiv:2311.11810 (2023).
- ChartStamp: Robust chart embedding for real-world applications. In Proceedings of the 30th ACM International Conference on Multimedia. 2786–2795.
- ChartLlama: A multimodal LLM for chart understanding and generation. arXiv preprint arXiv:2311.16483 (2023).
- Dan Hendrycks and Kevin Gimpel. 2016. Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units. CoRR abs/1606.08415 (2016). arXiv:1606.08415 http://arxiv.org/abs/1606.08415
- CogAgent: A visual language model for GUI agents. arXiv preprint arXiv:2312.08914 (2023).
- Question-controlled Text-aware Image Captioning. In Proceedings of the 29th ACM International Conference on Multimedia (Virtual Event, China) (MM ’21). Association for Computing Machinery, New York, NY, USA, 3097–3105. https://doi.org/10.1145/3474085.3475452
- mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model. arXiv:2311.18248 [cs.MM]
- mPLUG-DocOwl 1.5: Unified Structure Learning for OCR-free Document Understanding. arXiv:2403.12895 [cs.CV]
- From Pixels to Insights: A Survey on Automatic Chart Understanding in the Era of Large Foundation Models. arXiv:2403.12027 [cs.CL]
- OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation. arXiv:2311.17911 [cs.CV]
- Hallucination Augmented Contrastive Learning for Multimodal Large Language Model. arXiv:2312.06968 [cs.CV]
- DVQA: Understanding data visualizations via question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5648–5656.
- OpenCQA: Open-ended Question Answering with Charts. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (Eds.). Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 11817–11837. https://doi.org/10.18653/v1/2022.emnlp-main.811
- Chart-to-Text: A Large-Scale Benchmark for Chart Summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (Eds.). Association for Computational Linguistics, Dublin, Ireland, 4005–4023. https://doi.org/10.18653/v1/2022.acl-long.277
- Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding. arXiv:2311.16922 [cs.CV]
- Textbooks Are All You Need II: phi-1.5 technical report. arXiv:2309.05463 [cs.CL]
- Evaluating Object Hallucination in Large Vision-Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 292–305.
- SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models. arXiv:2311.07575 [cs.CV]
- DePlot: One-shot visual language reasoning by plot-to-table translation. In Findings of the Association for Computational Linguistics: ACL 2023, Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 10381–10399. https://doi.org/10.18653/v1/2023.findings-acl.660
- MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 12756–12770. https://doi.org/10.18653/v1/2023.acl-long.714
- MMC: Advancing multimodal chart understanding with large-scale instruction tuning. arXiv preprint arXiv:2311.10774 (2023).
- Improved Baselines with Visual Instruction Tuning. arXiv:2310.03744 [cs.CV]
- Visual instruction tuning. Advances in Neural Information Processing Systems 36 (2024).
- On the Hidden Mystery of OCR in Large Multimodal Models. arXiv:2305.07895 [cs.CV]
- ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning. In Findings of the Association for Computational Linguistics: ACL 2022. 2263–2279.
- UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning. arXiv:2305.14761 [cs.CL]
- ChartInstruct: Instruction Tuning for Chart Comprehension and Reasoning. arXiv:2403.09028 [cs.CL]
- ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning. arXiv preprint arXiv:2401.02384 (2024).
- PlotQA: Reasoning over scientific plots. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 1527–1536.
- Jason Obeid and Enamul Hoque. 2020. Chart-to-Text: Generating Natural Language Descriptions for Charts by Adapting the Transformer Model. CoRR abs/2010.09142 (2020). arXiv:2010.09142 https://arxiv.org/abs/2010.09142
- OpenAI. 2023a. GPT-3.5-Turbo. https://platform.openai.com/docs/models/gpt-3-5-turbo.
- OpenAI. 2023b. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL]
- Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744.
- Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 311–318.
- ChartSumm: A Comprehensive Benchmark for Automatic Chart Summarization of Long and Short Summaries. Proceedings of the Canadian Conference on Artificial Intelligence (June 2023). https://caiac.pubpub.org/pub/ujhjycsw.
- Object Hallucination in Image Captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 4035–4045.
- Towards VQA Models That Can Read. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- VisText: A Benchmark for Semantically Rich Chart Captioning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 7268–7298. https://doi.org/10.18653/v1/2023.acl-long.401
- Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 (2023).
- The NumPy Array: A Structure for Efficient Numerical Computation. Computing in Science & Engineering 13, 2 (2011), 22–30. https://doi.org/10.1109/MCSE.2011.37
- Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
- An LLM-free multi-dimensional benchmark for MLLMs hallucination evaluation. arXiv preprint arXiv:2311.07397 (2023).
- Evaluation and Analysis of Hallucination in Large Vision-Language Models. arXiv:2308.15126 [cs.LG]
- ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning. arXiv:2402.12185 [cs.CV]
- UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model. In EMNLP (Findings). Association for Computational Linguistics, 2841–2858.
- mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality. arXiv:2304.14178 [cs.CL]
- mPLUG-Octopus: The Versatile Assistant Empowered by A Modularized End-to-End Multimodal LLM. In Proceedings of the 31st ACM International Conference on Multimedia. 9365–9367.
- mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration. arXiv:2311.04257 [cs.CL]
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective. arXiv preprint arXiv:2402.14545 (2024).
- Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 11975–11986.
- MPMQA: multimodal question answering on product manuals. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37. 13958–13966.
- InternLM-XComposer: A vision-language large model for advanced text-image comprehension and composition. arXiv preprint arXiv:2309.15112 (2023).
- TRIE: end-to-end text reading and information extraction for document understanding. In Proceedings of the 28th ACM International Conference on Multimedia. 1413–1422.
- TinyLLaVA: A Framework of Small-scale Large Multimodal Models. arXiv:2402.14289 [cs.LG]