
DiffiT: Diffusion Vision Transformers for Image Generation

(2312.02139)
Published Dec 4, 2023 in cs.CV, cs.AI, and cs.LG

Abstract

Diffusion models, with their powerful expressivity and high sample quality, have achieved State-Of-The-Art (SOTA) performance in the generative domain. The pioneering Vision Transformer (ViT) has also demonstrated strong modeling capabilities and scalability, especially for recognition tasks. In this paper, we study the effectiveness of ViTs in diffusion-based generative learning and propose a new model denoted as Diffusion Vision Transformers (DiffiT). Specifically, we propose a methodology for fine-grained control of the denoising process and introduce the Time-dependent Multihead Self-Attention (TMSA) mechanism. DiffiT is surprisingly effective in generating high-fidelity images with significantly better parameter efficiency. We also propose latent and image space DiffiT models and show SOTA performance on a variety of class-conditional and unconditional synthesis tasks at different resolutions. The latent DiffiT model achieves a new SOTA FID score of 1.73 on the ImageNet-256 dataset while having 19.85% and 16.88% fewer parameters than other Transformer-based diffusion models such as MDT and DiT, respectively. Code: https://github.com/NVlabs/DiffiT

Figure: Overview of the DiffiT model, showing convolutional downsampling and upsampling layers.

Overview

  • The paper introduces Diffusion Vision Transformers (DiffiT) with an innovative time-dependent self-attention module for image generation.

  • DiffiT's attention module adapts to the stage of the denoising process, leveraging both the temporal dynamics of diffusion and long-range spatial dependencies within images.

  • It uses a U-shaped encoder-decoder based on vision transformers, adapting structure and attention throughout the image generation stages.

  • DiffiT achieved state-of-the-art results on the ImageNet and CIFAR-10 datasets, particularly in latent-space generation.

  • The effectiveness of the time-dependent self-attention was confirmed through experiments and ablation studies.

Diffusion models are at the forefront of AI developments in image generation, thanks to their expressive power and high-quality results. Recent advances have had transformative impacts on applications ranging from text-to-image generation to complex scene creation that was previously unattainable.

A new paper introduces Diffusion Vision Transformers (DiffiT), adding an innovative layer to diffusion-based generative learning: the time-dependent self-attention module. This module allows the attention layers within the model to adjust dynamically at different stages of the image denoising process, efficiently adapting to both the temporal dynamics of diffusion and the long-range spatial dependencies within the images.
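To make the idea concrete, below is a minimal sketch of how such a time-dependent self-attention layer could look, assuming (as described above) that queries, keys, and values are formed jointly from the spatial tokens and a time-step embedding. The class and variable names (`TimeDependentSelfAttention`, `qkv_spatial`, `qkv_time`) and the exact shapes are illustrative assumptions, not taken from the official NVlabs implementation.

```python
# Minimal sketch of a time-dependent multi-head self-attention layer.
# Assumption: spatial tokens and a shared time-step embedding are projected
# jointly into queries, keys, and values, so attention can vary with the step.
import torch
import torch.nn as nn


class TimeDependentSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # Separate linear maps for spatial tokens and the time embedding;
        # their contributions are summed before attention is computed.
        self.qkv_spatial = nn.Linear(dim, 3 * dim, bias=False)
        self.qkv_time = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) spatial tokens; t_emb: (batch, dim) time embedding
        B, N, C = x.shape
        qkv = self.qkv_spatial(x) + self.qkv_time(t_emb).unsqueeze(1)
        qkv = qkv.reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]          # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Because the time embedding enters the query/key/value projections directly, the attention weights themselves can change from one denoising step to the next rather than being fixed across the trajectory.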

At the core of this system is a U-shaped encoder-decoder architecture drawing inspiration from vision transformers (ViTs), a highly successful family of models for visual recognition tasks. Unlike existing denoising diffusion models, DiffiT adapts both its structural and attention elements to the time step of the generation process. As a result, attention focuses differently at the beginning, when images are primarily noise, than toward the end, when high-frequency details are being refined.
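The sketch below illustrates that U-shaped layout in the same spirit: transformer attention at each resolution, convolutional downsampling and upsampling between stages, and an encoder-to-decoder skip connection. It reuses the `TimeDependentSelfAttention` sketch above; the stage count, channel widths, and the `TinyDiffiTUNet` name are illustrative assumptions rather than the released architecture.

```python
import torch
import torch.nn as nn


class TinyDiffiTUNet(nn.Module):
    """Toy U-shaped denoiser: two resolutions, conv down/upsampling, one skip."""

    def __init__(self, dim: int = 64, t_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.enc = TimeDependentSelfAttention(dim, num_heads)
        self.down = nn.Conv2d(dim, dim * 2, kernel_size=3, stride=2, padding=1)
        self.mid = TimeDependentSelfAttention(dim * 2, num_heads)
        self.up = nn.ConvTranspose2d(dim * 2, dim, kernel_size=2, stride=2)
        self.dec = TimeDependentSelfAttention(dim, num_heads)
        # Project a shared time embedding to each stage's channel width.
        self.t_enc = nn.Linear(t_dim, dim)
        self.t_mid = nn.Linear(t_dim, dim * 2)

    @staticmethod
    def _attend(block, x, t):
        # Flatten the H x W grid into tokens, apply residual attention, reshape back.
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        tokens = tokens + block(tokens, t)
        return tokens.transpose(1, 2).reshape(B, C, H, W)

    def forward(self, x, t_emb):
        skip = self._attend(self.enc, x, self.t_enc(t_emb))             # encoder stage
        h = self._attend(self.mid, self.down(skip), self.t_mid(t_emb))  # bottleneck
        h = self.up(h)                                                   # upsample
        return self._attend(self.dec, h + skip, self.t_enc(t_emb))      # decoder + skip
```

Every stage receives the same time embedding (projected to its width), which is what lets the network shift its attention behavior between the noise-dominated early steps and the detail-refining late steps.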

The researchers benchmarked DiffiT on several datasets, including ImageNet and CIFAR-10, achieving state-of-the-art results in both image-space and latent-space generation tasks. Notably, in latent-space generation, where high-resolution images are synthesized from compressed representations, DiffiT set a new state-of-the-art FID of 1.73 on the ImageNet-256 dataset.

In a series of experiments, the authors demonstrated that the design choices in DiffiT, particularly the integration of the time-dependent self-attention, are crucial. Ablation studies further showed that different configurations of the model's components greatly affect performance. For instance, decoupling spatial and temporal information within the self-attention module resulted in notably worse results, underscoring the importance of their integration for the model's effectiveness.
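As a rough intuition for that ablation, the snippet below contrasts an integrated formulation, where the time embedding enters the query/key/value projection so attention weights can vary with the denoising step, against a decoupled one, where attention sees only spatial tokens. The shapes and weights are illustrative assumptions, not the paper's exact ablation settings.

```python
import torch

B, N, C = 2, 16, 64
x = torch.randn(B, N, C)        # spatial tokens
t_emb = torch.randn(B, C)       # time-step embedding
w_s = torch.randn(C, 3 * C)     # spatial projection (q, k, v concatenated)
w_t = torch.randn(C, 3 * C)     # temporal projection

# Integrated (as in the TMSA sketch): time enters the q/k/v projection,
# so the attention pattern itself can vary across denoising steps.
qkv_integrated = x @ w_s + (t_emb @ w_t).unsqueeze(1)   # (B, N, 3C)

# Decoupled: attention inputs come from spatial tokens only; the time signal
# would be injected outside the attention computation, so attention weights
# cannot adapt to the time step.
qkv_decoupled = x @ w_s                                  # (B, N, 3C)
```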

In conclusion, DiffiT represents a significant advancement in diffusion-based image generation models. With its novel time-dependent self-attention mechanism and transformer-based architecture, it sets new standards in the quality of generated images, displaying impressive control over the synthesis process at various temporal stages. The open-source code repository offers the community a valuable resource to further explore and expand upon these results.
