Emergent Mind

Inference via robust optimal transportation: theory and methods

(2301.06297)
Published Jan 16, 2023 in math.ST, stat.ML, and stat.TH

Abstract

Optimal transportation theory and the related $p$-Wasserstein distance ($W_p$, $p\geq 1$) are widely applied in statistics and machine learning. In spite of their popularity, inference based on these tools has some issues. For instance, it is sensitive to outliers and it may not even be defined when the underlying model has infinite moments. To cope with these problems, first we consider a robust version of the primal transportation problem and show that it defines the *robust Wasserstein distance*, $W^{(\lambda)}$, depending on a tuning parameter $\lambda > 0$. Second, we illustrate the link between $W_1$ and $W^{(\lambda)}$ and study its key measure-theoretic aspects. Third, we derive some concentration inequalities for $W^{(\lambda)}$. Fourth, we use $W^{(\lambda)}$ to define minimum distance estimators, we provide their statistical guarantees, and we illustrate how to apply the derived concentration inequalities for a data-driven selection of $\lambda$. Fifth, we provide the *dual* form of the robust optimal transportation problem and we apply it to machine learning problems (generative adversarial networks and domain adaptation). Numerical exercises provide evidence of the benefits yielded by our novel methods.
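The abstract's claim that $W^{(\lambda)}$ caps the influence of outliers can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's construction: it computes a discrete 1-Wasserstein distance between uniform empirical measures by solving the transport linear program with `scipy`, and a "robust" variant that simply truncates the ground cost at $2\lambda$ (the function names `discrete_ot` and `robust_w1` are illustrative, not from the paper).

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(cost):
    """Solve the discrete optimal transport LP between two uniform
    empirical measures, given an n-by-m ground-cost matrix."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)   # source marginal (uniform weights)
    b = np.full(m, 1.0 / m)   # target marginal (uniform weights)
    # Equality constraints on the flattened transport plan:
    # row sums equal a, column sums equal b.
    A_eq = []
    for i in range(n):
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
    for j in range(m):
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

def robust_w1(x, y, lam):
    """Toy robust 1-Wasserstein distance on the real line: the ground
    cost |x - y| is truncated at 2*lam, so no single outlier can
    contribute more than 2*lam per unit of transported mass."""
    cost = np.abs(x[:, None] - y[None, :])
    return discrete_ot(np.minimum(cost, 2.0 * lam))
```

For example, contaminating one observation with a gross outlier inflates the plain $W_1$ arbitrarily, while the truncated-cost value stays bounded by $2\lambda$ regardless of how far the outlier moves; this is the qualitative behaviour the abstract attributes to $W^{(\lambda)}$.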
