Near-Optimal UGC-hardness of Approximating Max k-CSP_R (1511.06558v1)
Abstract: In this paper, we prove an almost-optimal hardness for Max $k$-CSP$_R$ based on Khot's Unique Games Conjecture (UGC). In Max $k$-CSP$_R$, we are given a set of predicates each of which depends on exactly $k$ variables. Each variable can take any value from $1, 2, \dots, R$. The goal is to find an assignment to variables that maximizes the number of satisfied predicates. Assuming the Unique Games Conjecture, we show that it is NP-hard to approximate Max $k$-CSP$_R$ to within factor $2^{O(k \log k)}(\log R)^{k/2}/R^{k - 1}$ for any $k, R$. To the best of our knowledge, this result improves on all the known hardness of approximation results when $3 \leq k = o(\log R/\log \log R)$. In this case, the previous best hardness result was NP-hardness of approximating within a factor $O(k/R^{k-2})$ by Chan. When $k = 2$, our result matches the best known UGC-hardness result of Khot, Kindler, Mossel and O'Donnell. In addition, by extending an algorithm for Max 2-CSP$_R$ by Kindler, Kolla and Trevisan, we provide an $\Omega(\log R/R^{k - 1})$-approximation algorithm for Max $k$-CSP$_R$. This algorithm implies that our inapproximability result is tight up to a factor of $2^{O(k \log k)}(\log R)^{k/2 - 1}$. In comparison, when $k \geq 3$ is a constant, the previously known gap was $O(R)$, which is significantly larger than our gap of $O(\text{polylog } R)$. Finally, we show that we can replace the Unique Games Conjecture assumption with Khot's $d$-to-1 Conjecture and still get asymptotically the same hardness of approximation.
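To make the problem definition concrete, below is a minimal Python sketch of evaluating an assignment against a Max $k$-CSP$_R$ instance, together with the trivial uniformly random baseline. The instance representation and all names here are illustrative assumptions for exposition, not constructions from the paper.

```python
import random

# Illustrative sketch of Max k-CSP_R as defined in the abstract (names and
# representation are assumptions, not the paper's). A predicate on k
# variables over the alphabet {1, ..., R} is modeled as a pair
# (scope, accepting): the tuple of variable indices it reads, and the set
# of value tuples it accepts.

def satisfied_count(predicates, assignment):
    """Count how many predicates accept the values assigned to their scope."""
    return sum(
        tuple(assignment[v] for v in scope) in accepting
        for scope, accepting in predicates
    )

def random_assignment(num_vars, R):
    """Trivial baseline: assign each variable a uniform value from {1, ..., R}."""
    return [random.randint(1, R) for _ in range(num_vars)]

# Example instance: k = 2, R = 3, three variables, two predicates.
R = 3
predicates = [
    ((0, 1), {(1, 2), (2, 1)}),                   # x0, x1 take values {1, 2} in some order
    ((1, 2), {(a, a) for a in range(1, R + 1)}),  # x1 == x2
]
assignment = random_assignment(3, R)
print(satisfied_count(predicates, assignment), "of", len(predicates), "satisfied")
```

Under this representation, a uniformly random assignment satisfies each predicate with probability $|accepting|/R^k$ in expectation; for predicates with a single accepting tuple this yields the trivial $1/R^k$-approximation, which the paper's $\Omega(\log R/R^{k-1})$-approximation algorithm improves by a factor of roughly $R \log R$.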