Automatic Layout Planning for Visually-Rich Documents with Instruction-Following Models (2404.15271v1)
Abstract: Recent advancements in instruction-following models have made user interactions with models more user-friendly and efficient, broadening their applicability. In graphic design, non-professional users often struggle to create visually appealing layouts due to limited skills and resources. In this work, we introduce a novel multimodal instruction-following framework for layout planning that lets users arrange visual elements into tailored layouts simply by specifying the canvas size and the design purpose, such as a book cover, poster, brochure, or menu. We develop three layout reasoning tasks to train the model to understand and execute layout instructions. Experiments on two benchmarks show that our method not only simplifies the design process for non-professionals but also surpasses few-shot GPT-4V, achieving a 12% higher mIoU on Crello. These results highlight the potential of multimodal instruction-following models to automate and simplify the design process, offering an approachable solution for a wide range of design tasks on visually-rich documents.
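The abstract reports mIoU (mean intersection-over-union) as the layout quality metric. As a rough illustration only, not the authors' evaluation code, the sketch below computes IoU between predicted and ground-truth element boxes and averages it over a layout; it assumes boxes are given as (x, y, width, height) tuples in canvas coordinates and that predictions are matched one-to-one with ground-truth elements by index.

```python
# Minimal sketch (an assumption, not the paper's exact evaluation code)
# of the mIoU metric mentioned in the abstract. Boxes are (x, y, w, h)
# tuples in canvas coordinates; predicted elements are assumed to be
# matched to ground-truth elements by index.

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Corners of the intersection rectangle.
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def mean_iou(predicted, ground_truth):
    """Average IoU over index-matched predicted/ground-truth elements."""
    pairs = list(zip(predicted, ground_truth))
    return sum(iou(p, g) for p, g in pairs) / len(pairs)

# Hypothetical example: a poster layout with two elements (title, image).
pred = [(10, 10, 200, 50), (10, 80, 200, 300)]
gold = [(12, 8, 196, 54), (10, 90, 200, 290)]
print(f"mIoU = {mean_iou(pred, gold):.3f}")
```

Under this reading, an mIoU of 1.0 means every predicted element exactly overlaps its ground-truth counterpart, so the reported 12% gain over few-shot GPT-4V corresponds to noticeably tighter element placement on Crello.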