Isolated Diffusion: Optimizing Multi-Concept Text-to-Image Generation Training-Freely with Isolated Diffusion Guidance (2403.16954v2)

Published 25 Mar 2024 in cs.CV

Abstract: Large-scale text-to-image diffusion models have achieved great success in synthesizing high-quality, diverse images from target text prompts. Despite this revolutionary generation ability, current state-of-the-art models still struggle to handle multi-concept generation accurately in many cases. This phenomenon, known as "concept bleeding", manifests as unexpected overlapping or merging of different concepts. This paper presents a general approach for text-to-image diffusion models that addresses the mutual interference between different subjects and their attachments in complex scenes, pursuing better text-image consistency. The core idea is to isolate the synthesis processes of different concepts. We propose to bind each attachment to its corresponding subject separately with split text prompts. In addition, we introduce a revision method to fix the concept bleeding problem in multi-subject synthesis: we first rely on pre-trained object detection and segmentation models to obtain the layouts of subjects, then isolate and resynthesize each subject individually with its corresponding text prompt to avoid mutual interference. Overall, we achieve a training-free strategy, named Isolated Diffusion, to optimize multi-concept text-to-image synthesis. It is compatible with the latest Stable Diffusion XL (SDXL) and prior Stable Diffusion (SD) models. We compare our approach with alternative methods on a variety of multi-concept text prompts and demonstrate its effectiveness, with clear advantages in text-image consistency and a user study.
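The abstract's revision step can be pictured as a simple composition: given per-subject segmentation masks and per-subject denoising predictions obtained from split prompts, each subject's prediction overrides the shared one inside its own region. The following is a minimal, hypothetical numpy sketch of that composition idea — the function name, shapes, and toy data are illustrative assumptions, not the authors' code, and a real implementation would operate on diffusion-model noise predictions at each sampling step.

```python
import numpy as np

def isolated_compose(base_eps, subject_eps_list, subject_masks):
    """Sketch of the per-subject isolation idea (illustrative, not the
    paper's code): paste each subject's noise prediction, computed from
    its own split prompt, into the region given by that subject's
    segmentation mask, overriding the base prediction there."""
    eps = base_eps.copy()
    for eps_i, mask_i in zip(subject_eps_list, subject_masks):
        # Inside the subject's mask, use the isolated prediction;
        # elsewhere, keep the base prediction untouched.
        eps = np.where(mask_i, eps_i, eps)
    return eps

# Toy example: a 4x4 single-channel "latent" with one subject whose
# mask covers the top-left 2x2 corner.
base = np.zeros((1, 4, 4))
subject = [np.full((1, 4, 4), 2.0)]
mask = np.zeros((1, 4, 4), dtype=bool)
mask[:, :2, :2] = True
combined = isolated_compose(base, subject, [mask])
```

In this toy run, `combined` equals the subject's prediction inside the masked corner and the base prediction everywhere else, which is the "avoid mutual interference" property the abstract describes, reduced to its simplest form.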
