Emergent Mind

Abstract

In this paper, we present a new iterative rounding framework for many clustering problems. Using it, we obtain an $(\alpha_1 + \epsilon \leq 7.081 + \epsilon)$-approximation algorithm for $k$-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen [Chen, SODA 2008]. For $k$-means with outliers, we give an $(\alpha_2 + \epsilon \leq 53.002 + \epsilon)$-approximation, which is the first $O(1)$-approximation for this problem. The iterative rounding framework is very versatile; we show how it can be used to give $\alpha_1$- and $(\alpha_1 + \epsilon)$-approximation algorithms for the matroid median and knapsack median problems respectively, improving upon the previous best approximation ratios of $8$ [Swamy, ACM Trans. Algorithms] and $17.46$ [Byrka et al., ESA 2015]. The natural LP relaxation for the $k$-median/$k$-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, one in which at most two facilities are fractionally open. Thus, the LP integrality gap arises from the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution while losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive loss incurred by the conversion can be reduced to any $\epsilon > 0$.
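For context, the natural LP relaxation referred to above can be sketched as follows. This is the standard formulation for $k$-median with outliers, with notation assumed here rather than taken from the paper: $F$ is the facility set, $C$ the client set, $m$ the outlier budget, $y_i$ the extent to which facility $i$ is opened, $x_{ij}$ the extent to which client $j$ is connected to facility $i$, and $z_j$ the extent to which client $j$ is discarded as an outlier.

```latex
\begin{align*}
\min\ & \sum_{i \in F} \sum_{j \in C} c(i,j)\, x_{ij} \\
\text{s.t.}\ & \sum_{i \in F} x_{ij} + z_j \ge 1 && \forall j \in C \\
& x_{ij} \le y_i && \forall i \in F,\ j \in C \\
& \sum_{i \in F} y_i \le k \\
& \sum_{j \in C} z_j \le m \\
& x,\, y,\, z \ge 0
\end{align*}
```

An integral solution has $y \in \{0,1\}^F$; an almost-integral solution, in the sense of the abstract, allows at most two coordinates of $y$ to remain fractional. For $k$-means the same relaxation applies with $c(i,j)$ taken to be the squared distance.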
