Emergent Mind

Interpretation of Plug-and-Play (PnP) algorithms from a different angle

(2106.07795)
Published Jun 14, 2021 in math.OC, cs.NA, and math.NA

Abstract

It is well known that inverse problems are ill-posed, and solving them meaningfully requires regularization. Traditionally, the most popular regularization approaches have been variational, i.e., penalized or constrained functional minimization. In recent years, these classical approaches have increasingly been replaced by so-called plug-and-play (PnP) algorithms, which mimic proximal minimization schemes such as ADMM or FISTA but replace the proximal operator with a general denoiser. However, unlike traditional proximal gradient methods, these PnP algorithms lack sufficient theoretical analysis and convergence guarantees. Hence, their results, though empirically outstanding, are not well-defined in the sense of being minimizers of a variational problem. In this paper, we address this question of well-definedness from a different angle: we explain these algorithms from the viewpoint of a semi-iterative regularization method. In addition, we expand the family of regularized solutions corresponding to the classical semi-iterative methods, which further generalizes the explainability of these algorithms and enhances the recovery process. We conclude with several numerical results that validate the developed theory and show improvements over the traditional PnP algorithms, such as ADMM-PnP and FISTA-PnP.
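To make the PnP idea concrete, here is a minimal sketch (not taken from the paper) of a PnP-FISTA-style iteration for a linear inverse problem y = Ax + noise: the gradient step on the data-fidelity term is kept, but the proximal map of a regularizer is replaced by a generic denoiser. The function names, the soft-thresholding "denoiser", and all parameter choices below are illustrative assumptions, not the authors' algorithm or settings.

```python
import numpy as np

def pnp_fista(A, y, denoise, step, n_iters=100):
    """Illustrative PnP-FISTA sketch: a black-box denoiser replaces the prox step."""
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iters):
        # Gradient step on the data-fidelity term 0.5 * ||A z - y||^2
        grad = A.T @ (A @ z - y)
        # The denoiser plays the role of the proximal operator of a regularizer
        x_next = denoise(z - step * grad)
        # Standard FISTA momentum update
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

if __name__ == "__main__":
    # Toy example with a hypothetical soft-thresholding "denoiser" (assumption, for illustration only)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    denoise = lambda v: np.sign(v) * np.maximum(np.abs(v) - 1e-3, 0.0)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / ||A||^2 guarantees a valid gradient step
    x_hat = pnp_fista(A, y, denoise, step)
```

By contrast, a classical semi-iterative method such as Landweber iteration would simply repeat x ← x − step · Aᵀ(Ax − y), with early stopping acting as the regularizer; the paper's semi-iterative viewpoint interprets PnP iterations of the kind sketched above in this framework rather than as minimizers of a variational problem.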
