Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention (2403.11052v1)

Published 17 Mar 2024 in cs.CV and cs.CR

Abstract: Recent advancements in text-to-image diffusion models have demonstrated their remarkable capability to generate high-quality images from textual prompts. However, increasing research indicates that these models memorize and replicate images from their training data, raising tremendous concerns about potential copyright infringement and privacy risks. In our study, we provide a novel perspective to understand this memorization phenomenon by examining its relationship with cross-attention mechanisms. We reveal that during memorization, the cross-attention tends to focus disproportionately on the embeddings of specific tokens. The diffusion model is overfitted to these token embeddings, memorizing corresponding training images. To elucidate this phenomenon, we further identify and discuss various intrinsic findings of cross-attention that contribute to memorization. Building on these insights, we introduce an innovative approach to detect and mitigate memorization in diffusion models. The advantage of our proposed method is that it will not compromise the speed of either the training or the inference processes in these models while preserving the quality of generated images. Our code is available at https://github.com/renjie3/MemAttn .
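The abstract's core observation is that, for memorized prompts, cross-attention mass concentrates on a small number of text-token embeddings rather than spreading across the prompt, and that this concentration can serve as a detection signal. The sketch below is a minimal illustration of that idea, not the paper's exact formulation: it assumes access to softmax-normalized cross-attention maps collected from a diffusion U-Net, and the concentration metric (normalized entropy) and threshold are illustrative assumptions rather than the authors' method.

```python
import torch

def attention_concentration(attn_maps):
    """
    attn_maps: list of cross-attention tensors, each of shape
               (batch, heads, image_tokens, text_tokens), already softmaxed
               over the text-token axis.
    Returns a per-sample score in [0, 1]; higher means the attention mass
    is concentrated on fewer text-token embeddings.
    """
    scores = []
    for attn in attn_maps:
        # Average attention each text token receives across heads and image positions.
        per_token = attn.mean(dim=(1, 2))                      # (batch, text_tokens)
        per_token = per_token / per_token.sum(dim=-1, keepdim=True)
        # Normalized entropy: 1 = uniform attention over tokens, 0 = all mass on one token.
        entropy = -(per_token * (per_token + 1e-12).log()).sum(dim=-1)
        max_entropy = torch.log(torch.tensor(float(per_token.shape[-1])))
        scores.append(1.0 - entropy / max_entropy)
    # Average the concentration score over the layers/timesteps that were collected.
    return torch.stack(scores).mean(dim=0)                     # (batch,)

# Hypothetical usage: flag prompts whose generations may be memorized.
# The 0.5 threshold is an illustrative assumption, not the paper's criterion.
# scores = attention_concentration(collected_attn_maps)
# is_memorized = scores > 0.5
```

Because the score is computed from attention maps that the model already produces during denoising, a detector of this kind adds essentially no extra cost at inference time, which is consistent with the abstract's claim that neither training nor inference speed is compromised.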

Authors (7)
  1. Jie Ren (329 papers)
  2. Yaxin Li (27 papers)
  3. Han Xu (92 papers)
  4. Lingjuan Lyu (131 papers)
  5. Yue Xing (47 papers)
  6. Jiliang Tang (204 papers)
  7. Shenglai Zeng (19 papers)
Citations (15)
