
Abstract

Over the last 30 years a plethora of variational regularisation models for image reconstruction has been proposed and thoroughly inspected by the applied mathematics community. Among them, the pioneering prototype often taught and learned in basic courses in mathematical image processing is the celebrated Rudin-Osher-Fatemi (ROF) model \cite{ROF} which relies on the minimisation of the edge-preserving Total Variation (TV) semi-norm as regularisation term. Despite its (often limiting) simplicity, this model is still very much employed in many applications and used as a benchmark for assessing the performance of modern learning-based image reconstruction approaches, thanks to its thorough analytical and numerical understanding. Among the many extensions to TV proposed over the years, a large class is based on the concept of \emph{space variance}. Space-variant models can indeed overcome the intrinsic inability of TV to describe \emph{local} features (strength, sharpness, directionality) by means of an adaptive mathematical modelling which accommodates local regularisation weighting, variable smoothness and anisotropy. Those ideas can further be cast in the flexible Bayesian framework of generalised Gaussian distributions and combined with maximum likelihood and hierarchical optimisation approaches for efficient hyper-parameter estimation. In this work, we review and connect the major contributions in the field of space-variant TV-type image reconstruction models, focusing, in particular, on their Bayesian interpretation which paves the way to new exciting and unexplored research directions.
