
Diffusion Posterior Sampling for General Noisy Inverse Problems

(2209.14687)
Published Sep 29, 2022 in stat.ML, cs.AI, cs.CV, and cs.LG

Abstract

Diffusion models have been recently studied as powerful generative inverse problem solvers, owing to their high quality reconstructions and the ease of combining existing iterative solvers. However, most works focus on solving simple linear inverse problems in noiseless settings, which significantly under-represents the complexity of real-world problems. In this work, we extend diffusion solvers to efficiently handle general noisy (non)linear inverse problems via approximation of the posterior sampling. Interestingly, the resulting posterior sampling scheme is a blended version of diffusion sampling with the manifold constrained gradient without a strict measurement consistency projection step, yielding a more desirable generative path in noisy settings compared to the previous studies. Our method demonstrates that diffusion models can incorporate various measurement noise statistics such as Gaussian and Poisson, and also efficiently handle noisy nonlinear inverse problems such as Fourier phase retrieval and non-uniform deblurring. Code available at https://github.com/DPS2022/diffusion-posterior-sampling

Overview

  • Diffusion models are explored for solving noisy inverse problems by approximating the posterior distribution without the need for spectral domain computations.

  • The paper introduces an efficient general framework that can integrate different types of noise statistics, such as Gaussian and Poisson, in inverse problems.

  • An approximation of the likelihood for diffusion models is derived, enabling effective scaling to nonlinear problems using automatic differentiation.

  • Two algorithms are developed specifically for noise models with Gaussian and Poisson distributions, improving performance in diverse inverse problem scenarios.

  • Experimental results show that Diffusion Posterior Sampling outperforms state-of-the-art methods in terms of key performance metrics, especially in the presence of high measurement noise.

Introduction

In recent years, diffusion models have garnered significant attention in the generative modeling community, offering a powerful paradigm for learning data distributions. In particular, their application to inverse problems is a promising research avenue: such problems, prevalent across scientific and imaging domains, are typically ill-posed, and their measurements are often corrupted by noise.

Novel Methodology

This paper extends diffusion models to noisy (non)linear inverse problems through a novel approximation of posterior sampling. The authors provide an efficient general framework that integrates diverse measurement noise statistics, such as Gaussian and Poisson, and handles nonlinear forward operators such as non-uniform deblurring and Fourier phase retrieval. The method removes the need for spectral-domain computations and singular value decomposition (SVD), which facilitates its application to a broader class of inverse problems.
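For concreteness, the measurement models considered can be sketched as follows (notation loosely follows the paper: A is the forward operator, x_0 the unknown image, y the measurement):

```latex
% Forward models handled by the framework: a general (possibly nonlinear)
% operator A(.) with either Gaussian or Poisson measurement noise.
\begin{align}
  y &= \mathcal{A}(x_0) + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I)
      && \text{(Gaussian noise)} \\
  y &\sim \mathrm{Poisson}\!\big(\mathcal{A}(x_0)\big)
      && \text{(Poisson noise)}
\end{align}
% Goal: sample from the posterior p(x_0 | y) \propto p(y | x_0)\, p(x_0),
% with a pretrained diffusion model supplying the prior p(x_0).
```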

Mathematical Foundation and Algorithm

The method rests on an approximation of the measurement likelihood under the diffusion model, yielding an approximate gradient of the log-likelihood term that guides sampling. This construction accommodates measurement noise and scales to nonlinear forward operators whenever gradients can be computed via automatic differentiation. Specifically, the intractable likelihood p(y | x_t) is approximated by evaluating the measurement model at the posterior mean E[x_0 | x_t], which is obtained from the learned score via Tweedie's formula. The approximation error is quantified by the Jensen gap, for which a formal bound is provided under Gaussian assumptions.
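In DDPM/VP-SDE notation, the key relations can be sketched as follows (the exact constants depend on the noise schedule; the score is supplied by the trained network):

```latex
% Posterior (conditional) score used during reverse diffusion:
\nabla_{x_t}\log p_t(x_t \mid y)
  = \nabla_{x_t}\log p_t(x_t) + \nabla_{x_t}\log p_t(y \mid x_t)

% Intractable likelihood replaced by its value at the Tweedie estimate:
p_t(y \mid x_t) \;\simeq\; p\big(y \mid \hat{x}_0(x_t)\big),
\qquad
\hat{x}_0(x_t)
  = \frac{1}{\sqrt{\bar{\alpha}_t}}
    \Big( x_t + (1 - \bar{\alpha}_t)\,\nabla_{x_t}\log p_t(x_t) \Big)
```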

Algorithmically, each reverse-diffusion step augments the standard ancestral update with a gradient step on the data-fidelity term evaluated at this posterior-mean estimate, with the correction weighted according to the noise statistics. Two algorithms are derived, one for Gaussian and one for Poisson noise. The manifold constrained gradient (MCG) method, previously proposed for noiseless setups, is recovered as a special case of this approach when the measurements are noiseless.
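The following PyTorch-style sketch illustrates one DPS step under Gaussian measurement noise, roughly in the spirit of the paper's Algorithm 1; `model`, `forward_op`, and the schedule tensors are hypothetical placeholders rather than the authors' actual implementation:

```python
import torch

def dps_gaussian_step(x_t, t, y, model, forward_op, alpha, alpha_bar, sigma, zeta):
    """One reverse-diffusion step with DPS guidance for Gaussian noise (illustrative sketch).

    Assumed (hypothetical) interfaces: model(x, t) predicts the noise eps_theta,
    forward_op is the (possibly nonlinear) measurement operator A, and
    alpha, alpha_bar, sigma are 1-D DDPM schedule tensors indexed by t.
    """
    x_t = x_t.detach().requires_grad_(True)

    # Tweedie estimate of the posterior mean x0_hat = E[x_0 | x_t]
    eps = model(x_t, t)
    x0_hat = (x_t - torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alpha_bar[t])

    # Standard unconditional ancestral DDPM update
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    x_prev = (x_t - (1.0 - alpha[t]) / torch.sqrt(1.0 - alpha_bar[t]) * eps) \
             / torch.sqrt(alpha[t]) + sigma[t] * z

    # Measurement-consistency correction: gradient of ||y - A(x0_hat)|| w.r.t. x_t
    residual = torch.linalg.vector_norm(y - forward_op(x0_hat))
    grad = torch.autograd.grad(residual, x_t)[0]

    return (x_prev - zeta * grad).detach()
```

In practice the authors report scaling the step size by the inverse of the residual norm, which is equivalent to stepping along the gradient of the unsquared residual norm as done above; handling Poisson noise amounts to swapping in a data-fidelity term matched to the Poisson likelihood, as in the paper's second algorithm.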

Experimental Results

The experimental validation spans a range of linear inverse problems, including inpainting, super-resolution, and deblurring, as well as the more challenging nonlinear problems of Fourier phase retrieval and non-uniform deblurring. The method shows significant improvements over existing state-of-the-art baselines on quantitative metrics such as PSNR, SSIM, FID, and LPIPS, as well as in visual quality, particularly at higher levels of measurement noise.

Conclusion

Diffusion Posterior Sampling (DPS) establishes a robust methodology for solving a wide spectrum of noisy inverse problems, covering both linear and, notably, nonlinear scenarios. Because the formulation avoids strict measurement-consistency projections and spectral-domain computations, it generalizes efficiently across noise models and inverse problem families. The results indicate the strength of DPS and pave the way for further exploration and refinement of diffusion-based methods for inverse problem solving.
