
Near-Optimal UGC-hardness of Approximating Max k-CSP_R

(arXiv:1511.06558)
Published Nov 20, 2015 in cs.CC and cs.DS

Abstract

In this paper, we prove an almost-optimal hardness result for Max $k$-CSP$_R$ based on Khot's Unique Games Conjecture (UGC). In Max $k$-CSP$_R$, we are given a set of predicates, each of which depends on exactly $k$ variables. Each variable can take any value from $\{1, 2, \dots, R\}$. The goal is to find an assignment to the variables that maximizes the number of satisfied predicates. Assuming the Unique Games Conjecture, we show that it is NP-hard to approximate Max $k$-CSP$_R$ to within a factor of $2^{O(k \log k)}(\log R)^{k/2}/R^{k - 1}$ for any $k, R$. To the best of our knowledge, this result improves on all the known hardness-of-approximation results when $3 \leq k = o(\log R/\log \log R)$. In this case, the previous best hardness result was NP-hardness of approximating within a factor of $O(k/R^{k-2})$ by Chan. When $k = 2$, our result matches the best known UGC-hardness result of Khot, Kindler, Mossel and O'Donnell. In addition, by extending an algorithm for Max $2$-CSP$_R$ by Kindler, Kolla and Trevisan, we provide an $\Omega(\log R/R^{k - 1})$-approximation algorithm for Max $k$-CSP$_R$. This algorithm implies that our inapproximability result is tight up to a factor of $2^{O(k \log k)}(\log R)^{k/2 - 1}$. In comparison, when $3 \leq k$ is a constant, the previously known gap was $O(R)$, which is significantly larger than our gap of $O(\text{polylog } R)$. Finally, we show that we can replace the Unique Games Conjecture assumption with Khot's $d$-to-1 Conjecture and still get asymptotically the same hardness of approximation.
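To make the problem definition concrete, here is a minimal brute-force sketch of Max $k$-CSP$_R$ (not from the paper; the function name and instance encoding are illustrative). It enumerates all $R^n$ assignments and counts satisfied predicates, so it only pins down the objective; it is not an efficient or approximation algorithm.

```python
from itertools import product

def max_k_csp_r(num_vars, R, constraints):
    """Exact Max k-CSP_R by exhaustive search over all R**num_vars
    assignments; returns the maximum number of satisfied predicates.
    Each constraint is (scope, pred): scope is a k-tuple of variable
    indices, pred is the set of satisfying local k-tuples over {0,...,R-1}.
    """
    best = 0
    for assignment in product(range(R), repeat=num_vars):
        satisfied = sum(
            1
            for scope, pred in constraints
            if tuple(assignment[v] for v in scope) in pred
        )
        best = max(best, satisfied)
    return best

# Example: k = 2, R = 3, two "not-equal" predicates over 3 variables.
neq = {(a, b) for a in range(3) for b in range(3) if a != b}
constraints = [((0, 1), neq), ((1, 2), neq)]
print(max_k_csp_r(3, 3, constraints))  # both predicates satisfiable -> 2
```

As a sanity check on the tightness claim in the abstract: dividing the UGC-hardness ratio $2^{O(k \log k)}(\log R)^{k/2}/R^{k-1}$ by the algorithmic guarantee $\Omega(\log R/R^{k-1})$ leaves exactly the stated gap of $2^{O(k \log k)}(\log R)^{k/2 - 1}$, which is $O(\text{polylog } R)$ for constant $k$.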
