Learning Diverse Image Colorization (1612.01958v2)

Published 6 Dec 2016 in cs.CV

Abstract: Colorization is an ambiguous problem, with multiple viable colorizations for a single grey-level image. However, previous methods only produce the single most probable colorization. Our goal is to model the diversity intrinsic to the problem of colorization and produce multiple colorizations that display long-scale spatial co-ordination. We learn a low dimensional embedding of color fields using a variational autoencoder (VAE). We construct loss terms for the VAE decoder that avoid blurry outputs and take into account the uneven distribution of pixel colors. Finally, we build a conditional model for the multi-modal distribution between grey-level image and the color field embeddings. Samples from this conditional model result in diverse colorization. We demonstrate that our method obtains better diverse colorizations than a standard conditional variational autoencoder (CVAE) model, as well as a recently proposed conditional generative adversarial network (cGAN).

Authors (5)
  1. Aditya Deshpande (13 papers)
  2. Jiajun Lu (12 papers)
  3. Mao-Chuang Yeh (6 papers)
  4. Min Jin Chong (10 papers)
  5. David Forsyth (54 papers)
Citations (184)

Summary

  • The paper introduces a novel framework that couples a VAE for low-dimensional color embedding with an MDN for predicting diverse, realistic colorizations.
  • The method overcomes common VAE-induced blurriness by employing custom loss functions to effectively manage the non-uniform distribution of pixel colors.
  • Evaluations on datasets like LFW, LSUN Church, and ImageNet-Val demonstrate significant improvements in diversity and spatial coherence compared to prior methods.

Learning Diverse Image Colorization

This paper addresses the inherently ambiguous task of colorizing grey-scale images, where multiple plausible colorizations can exist for the same image. Traditional methods tend to produce only the single most probable colorization, whereas the approach proposed by Deshpande et al. aims to capture the diversity intrinsic to the task by generating multiple, spatially coherent and realistic colorizations. The authors employ a variational autoencoder (VAE) to learn a low-dimensional embedding of color fields, avoiding common issues of blurred output through tailored loss functions.

Methodological Overview

The paper proposes a two-step strategy for achieving diverse image colorization:

  1. Low-Dimensional Embedding with VAE: The authors utilize a VAE to encode color fields into a low-dimensional latent space. Custom loss terms are introduced for the VAE decoder to prevent blurriness and to account for the non-uniform distribution of pixel colors, addressing the tendency of VAE models to produce overly smooth, desaturated outputs. The loss combines a specificity term with a colorfulness term that re-weights the reconstruction error toward rarer, more saturated colors (a sketch of such a re-weighted loss follows this list).
  2. Conditional Modeling with MDN: To link grey-scale images and the learned embeddings, a Mixture Density Network (MDN) is employed. Given a grey-level image, it predicts a multi-modal (Gaussian mixture) distribution over the embeddings, and sampling from this distribution yields diverse colorizations. During training, rather than minimizing the full mixture negative log-likelihood, which is difficult to optimize in a high-dimensional embedding space, the loss is applied only to the Gaussian component whose mean is closest to the ground-truth embedding (see the second sketch after this list).
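
To make step 1 concrete, the sketch below shows one way a re-weighted reconstruction loss over the ab chrominance channels could look. It is a minimal illustration, not the paper's full decoder loss (which also includes specificity and gradient terms), and the names (weighted_color_loss, pred_ab, ab_bins, bin_weights) are assumptions made for the example.

```python
import torch

def weighted_color_loss(pred_ab, target_ab, ab_bins, bin_weights):
    """Reconstruction loss that up-weights rare (more saturated) colors.

    pred_ab, target_ab: (B, 2, H, W) predicted / ground-truth ab chrominance.
    ab_bins:            (K, 2) centers of quantized ab-space bins.
    bin_weights:        (K,) weight per bin, e.g. inverse empirical frequency
                        of that bin in the training set (hypothetical choice).
    """
    B, _, H, W = target_ab.shape
    # Assign every ground-truth pixel to its nearest ab bin.
    flat = target_ab.permute(0, 2, 3, 1).reshape(-1, 2)       # (B*H*W, 2)
    bin_idx = torch.cdist(flat, ab_bins).argmin(dim=1)        # nearest bin per pixel
    w = bin_weights[bin_idx].reshape(B, 1, H, W)              # per-pixel weight
    # Weighted squared error between predicted and true chrominance.
    sq_err = ((pred_ab - target_ab) ** 2).sum(dim=1, keepdim=True)
    return (w * sq_err).mean()
```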
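
The closest-component training rule of step 2 can be sketched as follows, assuming a Gaussian mixture with fixed spherical variance over the VAE embedding; the module layout and names (ColorMDN, feat_dim, n_components, sigma) are illustrative rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ColorMDN(nn.Module):
    """Toy MDN head: grey-image features -> mixture of Gaussians over the
    low-dimensional VAE color embedding. Sizes here are placeholders."""
    def __init__(self, feat_dim=512, z_dim=64, n_components=8):
        super().__init__()
        self.n, self.z_dim = n_components, z_dim
        self.mu = nn.Linear(feat_dim, n_components * z_dim)   # component means
        self.logit_pi = nn.Linear(feat_dim, n_components)     # mixture weights

    def forward(self, feats):
        mu = self.mu(feats).view(-1, self.n, self.z_dim)
        log_pi = torch.log_softmax(self.logit_pi(feats), dim=-1)
        return mu, log_pi

def closest_component_loss(mu, log_pi, z_true, sigma=0.1):
    """Penalize only the Gaussian whose mean is nearest the true embedding."""
    d2 = ((mu - z_true.unsqueeze(1)) ** 2).sum(dim=-1)         # (B, n) squared distances
    m = d2.argmin(dim=1)                                       # closest component per sample
    idx = torch.arange(mu.size(0), device=mu.device)
    return (-log_pi[idx, m] + d2[idx, m] / (2 * sigma ** 2)).mean()
```

At test time, a component is sampled according to its mixture weight and its mean (optionally perturbed using the fixed variance) serves as an embedding; decoding each sampled embedding with the VAE decoder produces one colorization, so repeated sampling yields a diverse set.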

Evaluation and Results

The method is validated against standard benchmarks and outperforms existing models such as the conditional variational autoencoder (CVAE) and the conditional generative adversarial network (cGAN). The authors report substantial gains in diversity, with color fields that are not only varied but also realistic and spatially coordinated. Quantitative metrics show improvements both in the variability of the colorizations and in their agreement with the ground-truth images.

The paper evaluates datasets ranging from aligned faces (LFW) to unaligned, diverse scenes (LSUN Church, ImageNet-Val), indicating the flexibility of the proposed approach. Notably, the custom loss terms in the VAE prove crucial for maintaining color quality, achieving lower absolute error than a standard L₂ reconstruction loss.

Implications and Future Directions

From a practical standpoint, this model opens new avenues for automated image editing and restoration, offering creative control through diverse colorization outputs. Theoretically, the work contributes to the image-generation literature by combining the generative capacity of VAEs with the multi-modal prediction of MDNs, presenting a framework adaptable to other vision tasks with similar ambiguities.

Future work could extend this strategy to improve the spatial detail captured in the embeddings or adapt the methodology to other domains requiring diverse predictions. Additionally, refining the balance between diversity and fidelity remains an open challenge that could further enhance applications in dynamic generative tasks.