
Constant Approximation for $k$-Median and $k$-Means with Outliers via Iterative Rounding (1711.01323v2)

Published 3 Nov 2017 in cs.DS

Abstract: In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an $(\alpha_1 + \epsilon \leq 7.081 + \epsilon)$-approximation algorithm for $k$-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen [Chen, SODA 2008]. For $k$-means with outliers, we give an $(\alpha_2+\epsilon \leq 53.002 + \epsilon)$-approximation, which is the first $O(1)$-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give $\alpha_1$- and $(\alpha_1 + \epsilon)$-approximation algorithms for the matroid and knapsack median problems respectively, improving upon the previous best approximation ratios of $8$ [Swamy, ACM Trans. Algorithms] and $17.46$ [Byrka et al., ESA 2015]. The natural LP relaxation for the $k$-median/$k$-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution, losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any $\epsilon > 0$.
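For context, the "natural LP relaxation" referred to above is typically written as follows for $k$-median with at most $m$ outliers; the notation here (facility set $F$, client set $C$, opening variables $y_i$, assignment variables $x_{ij}$, outlier variables $z_j$) is a standard sketch and may not match the paper's exact formulation:

$$\min \sum_{i \in F}\sum_{j \in C} d(i,j)\,x_{ij} \quad \text{s.t.} \quad \sum_{i \in F} x_{ij} + z_j = 1 \;\; \forall j \in C, \quad x_{ij} \le y_i \;\; \forall i \in F,\, j \in C, \quad \sum_{i \in F} y_i \le k, \quad \sum_{j \in C} z_j \le m, \quad x, y, z \ge 0.$$

For $k$-means with outliers, the same relaxation is used with $d(i,j)$ replaced by $d(i,j)^2$. An almost-integral solution in the sense of the abstract is one in which all but at most two of the $y_i$ take values in $\{0,1\}$.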

Citations (108)
