Interpolating between Optimal Transport and KL regularized Optimal Transport using Rényi Divergences

(arXiv:2404.18834)
Published Apr 29, 2024 in math.OC, cs.NA, math.FA, and math.NA

Abstract

Regularized optimal transport (OT) has received much attention in recent years, starting with Cuturi's paper on Kullback-Leibler (KL) divergence regularized OT. In this paper, we propose to regularize the OT problem using the family of $\alpha$-Rényi divergences for $\alpha \in (0, 1)$. Rényi divergences are neither $f$-divergences nor Bregman distances, but they recover the KL divergence in the limit $\alpha \nearrow 1$. The advantage of introducing the additional parameter $\alpha$ is that for $\alpha \searrow 0$ we obtain convergence to the unregularized OT problem. For the KL regularized OT problem, this was previously achieved by letting the regularization parameter tend to zero, which causes numerical instabilities. We present two different ways to obtain premetrics on probability measures, namely by Rényi divergence constraints and by penalization. The latter premetric interpolates between the unregularized and the KL regularized OT problem, with weak convergence of the minimizers, generalizing the interpolating property of KL regularized OT. We use a nested mirror descent algorithm to solve the primal formulation. On both real and synthetic data sets, Rényi regularized OT plans outperform their KL and Tsallis counterparts in that they are closer to the unregularized transport plans and recover the ground truth better in inference tasks.
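
For orientation, here is a minimal sketch of the objects the abstract refers to, written with the standard normalization of the Rényi divergence and with the product measure $\mu \otimes \nu$ as reference measure; the paper's exact scaling, reference measure, and penalty weight $\varepsilon > 0$ may differ and are assumptions here. For $\alpha \in (0,1)$,

$$R_\alpha(\pi \mid \mu \otimes \nu) = \frac{1}{\alpha - 1} \, \log \int \Big( \frac{\mathrm{d}\pi}{\mathrm{d}(\mu \otimes \nu)} \Big)^{\alpha} \, \mathrm{d}(\mu \otimes \nu),$$

which converges to $\mathrm{KL}(\pi \mid \mu \otimes \nu)$ as $\alpha \nearrow 1$. A penalized problem of the kind described above would then read

$$\mathrm{OT}_{\alpha,\varepsilon}(\mu, \nu) = \min_{\pi \in \Pi(\mu, \nu)} \int c \, \mathrm{d}\pi + \varepsilon \, R_\alpha(\pi \mid \mu \otimes \nu),$$

where $\Pi(\mu, \nu)$ is the set of couplings of $\mu$ and $\nu$ and $c$ is the ground cost; imposing an upper bound on $R_\alpha$ as a constraint instead of adding it as a penalty gives the constrained variant mentioned in the abstract.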
