
DiffiT: Diffusion Vision Transformers for Image Generation (2312.02139v3)

Published 4 Dec 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Diffusion models with their powerful expressivity and high sample quality have achieved State-Of-The-Art (SOTA) performance in the generative domain. The pioneering Vision Transformer (ViT) has also demonstrated strong modeling capabilities and scalability, especially for recognition tasks. In this paper, we study the effectiveness of ViTs in diffusion-based generative learning and propose a new model denoted as Diffusion Vision Transformers (DiffiT). Specifically, we propose a methodology for fine-grained control of the denoising process and introduce the Time-dependent Multihead Self-Attention (TMSA) mechanism. DiffiT is surprisingly effective in generating high-fidelity images with significantly better parameter efficiency. We also propose latent and image space DiffiT models and show SOTA performance on a variety of class-conditional and unconditional synthesis tasks at different resolutions. The Latent DiffiT model achieves a new SOTA FID score of 1.73 on the ImageNet-256 dataset while having 19.85% and 16.88% fewer parameters than other Transformer-based diffusion models such as MDT and DiT, respectively. Code: https://github.com/NVlabs/DiffiT

Summary

  • The paper presents DiffiT, which integrates a time-dependent self-attention module into a U-shaped vision transformer architecture to improve image denoising across stages.
  • The methodology achieves state-of-the-art performance on benchmarks like ImageNet-256 and CIFAR-10, particularly enhancing latent space image generation.
  • Ablation studies confirm that coupling spatial and temporal information within the attention mechanism is crucial for high-quality synthesis, underscoring the importance of the TMSA design.

Diffusion models are at the forefront of AI developments in image generation, prized for their expressivity and high sample quality. Recent advances have driven applications ranging from text-to-image generation to complex scene synthesis that were previously out of reach.

A new paper introduces Diffusion Vision Transformers (DiffiT), which brings a novel component to diffusion-based generative learning: the time-dependent self-attention module. This module lets the attention layers adjust dynamically at different stages of the denoising process, adapting simultaneously to the temporal dynamics of diffusion and to long-range spatial dependencies within the images.
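According to the paper, this time-dependent multi-head self-attention (TMSA) forms queries, keys, and values by combining projections of the spatial tokens with projections of a time-step embedding. The PyTorch sketch below illustrates that idea; the class name, layer layout, and shapes are assumptions for illustration, and the authors' actual implementation includes further components (such as relative positional bias) omitted here.

```python
import torch
import torch.nn as nn

class TimeDependentSelfAttention(nn.Module):
    """Illustrative sketch of time-dependent multi-head self-attention.

    Queries, keys, and values combine projections of the spatial tokens
    with projections of a time-step embedding, so the attention pattern
    can shift as denoising progresses. Not the authors' implementation.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # Separate projections for spatial tokens and the time embedding;
        # their contributions are summed before attention is computed.
        self.qkv_spatial = nn.Linear(dim, 3 * dim, bias=False)
        self.qkv_time = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); t_emb: (batch, dim) time-step embedding.
        B, N, C = x.shape
        qkv = self.qkv_spatial(x) + self.qkv_time(t_emb).unsqueeze(1)
        q, k, v = (z.reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)
                   for z in qkv.chunk(3, dim=-1))
        attn = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Because the time projection enters the same query/key/value computation as the spatial one, the attention weights themselves, not just the token features, change with the denoising step.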

At the core of this system is a U-shaped encoder-decoder architecture drawing inspiration from vision transformers (ViTs), a family of models that has proven highly successful in visual recognition. Unlike most existing denoising diffusion models, DiffiT conditions both its structural and attention elements on the time step of the generation process. As a result, the model attends differently at the beginning, when images are primarily noise, than toward the end, when high-frequency details are being refined.
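To make the overall layout concrete, here is a deliberately small, assumption-laden sketch of a U-shaped, time-conditioned denoiser with a skip connection across matching resolutions. A real DiffiT stage would apply TMSA over flattened patch tokens; the stand-in block here simply injects the time embedding additively into a convolutional mixer, and all names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class StageBlock(nn.Module):
    """Stand-in for a transformer stage; DiffiT would use TMSA here."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.GroupNorm(8, dim)
        self.mix = nn.Conv2d(dim, dim, 3, padding=1)
        self.time_proj = nn.Linear(dim, dim)

    def forward(self, x, t_emb):
        # Inject the time embedding additively, a simple proxy for
        # time-dependent attention.
        h = self.mix(self.norm(x))
        return x + h + self.time_proj(t_emb)[:, :, None, None]

class UShapedDenoiserSketch(nn.Module):
    """Minimal U-shaped encoder-decoder with one skip connection."""

    def __init__(self, in_ch: int = 3, dim: int = 64):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dim, 3, padding=1)
        self.enc = StageBlock(dim)
        self.down = nn.Conv2d(dim, dim, 4, stride=2, padding=1)         # halve resolution
        self.mid = StageBlock(dim)
        self.up = nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1)  # restore it
        self.dec = StageBlock(dim)
        self.head = nn.Conv2d(dim, in_ch, 3, padding=1)

    def forward(self, x, t_emb):
        h1 = self.enc(self.stem(x), t_emb)
        h = self.mid(self.down(h1), t_emb)
        h = self.up(h) + h1  # skip connection across the "U"
        return self.head(self.dec(h, t_emb))
```

For instance, `UShapedDenoiserSketch()(torch.randn(2, 3, 32, 32), torch.randn(2, 64))` returns a (2, 3, 32, 32) noise prediction; the skip connection lets the decoder reuse fine spatial detail captured by the encoder.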

The researchers benchmarked DiffiT on several datasets, including ImageNet and CIFAR-10, achieving state-of-the-art results in both image-space and latent-space generation. Notably, in latent-space generation, where denoising operates on a compressed representation that is then decoded into a high-resolution image, the latent DiffiT model set a new state-of-the-art FID of 1.73 on ImageNet-256.
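As background for how latent-space models of this kind produce images, the sketch below runs a generic DDPM-style ancestral sampling loop in latent space and decodes the result at the end. The linear beta schedule and the `denoiser`/`decoder` interfaces are assumptions for illustration; DiffiT's own training and sampling configuration may differ.

```python
import torch

@torch.no_grad()
def sample_latent_diffusion(denoiser, decoder, steps, latent_shape, device="cpu"):
    """Generic DDPM-style ancestral sampling in latent space (illustrative).

    `denoiser(z, t)` is assumed to predict the noise in latent z at step t,
    and `decoder` to map the final latent back to pixel space.
    """
    betas = torch.linspace(1e-4, 0.02, steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(latent_shape, device=device)  # start from pure noise
    for t in reversed(range(steps)):
        t_batch = torch.full((latent_shape[0],), t, device=device)
        eps = denoiser(z, t_batch)
        # Posterior mean of the reverse step; noise is omitted at t == 0.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        z = (z - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return decoder(z)
```

Running the denoising loop on a compact latent rather than on full-resolution pixels is what makes high-resolution synthesis tractable, and it is the setting in which the latent DiffiT model achieves its reported FID of 1.73.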

In a series of experiments, the authors demonstrated that DiffiT's design choices, particularly the integration of time-dependent self-attention, are crucial. Ablation studies further showed that the configuration of the model's components strongly affects performance: decoupling spatial and temporal information within the self-attention module, for instance, led to notably worse results, underscoring the importance of their integration.

In conclusion, DiffiT represents a significant advance in diffusion-based image generation. With its time-dependent self-attention mechanism and transformer-based architecture, it sets a new standard for the quality of generated images while exercising fine control over the synthesis process across temporal stages. The open-source code repository gives the community a valuable resource for exploring and extending these results.
