
Near-Optimal UGC-hardness of Approximating Max k-CSP_R (1511.06558v1)

Published 20 Nov 2015 in cs.CC and cs.DS

Abstract: In this paper, we prove an almost-optimal hardness for Max $k$-CSP$_R$ based on Khot's Unique Games Conjecture (UGC). In Max $k$-CSP$_R$, we are given a set of predicates each of which depends on exactly $k$ variables. Each variable can take any value from $1, 2, \dots, R$. The goal is to find an assignment to variables that maximizes the number of satisfied predicates. Assuming the Unique Games Conjecture, we show that it is NP-hard to approximate Max $k$-CSP$_R$ to within factor $2^{O(k \log k)}(\log R)^{k/2}/R^{k - 1}$ for any $k, R$. To the best of our knowledge, this result improves on all the known hardness of approximation results when $3 \leq k = o(\log R/\log \log R)$. In this case, the previous best hardness result was NP-hardness of approximating within a factor $O(k/R^{k-2})$ by Chan. When $k = 2$, our result matches the best known UGC-hardness result of Khot, Kindler, Mossel and O'Donnell. In addition, by extending an algorithm for Max 2-CSP$_R$ by Kindler, Kolla and Trevisan, we provide an $\Omega(\log R/R^{k - 1})$-approximation algorithm for Max $k$-CSP$_R$. This algorithm implies that our inapproximability result is tight up to a factor of $2^{O(k \log k)}(\log R)^{k/2 - 1}$. In comparison, when $3 \leq k$ is a constant, the previously known gap was $O(R)$, which is significantly larger than our gap of $O(\text{polylog } R)$. Finally, we show that we can replace the Unique Games Conjecture assumption with Khot's $d$-to-1 Conjecture and still get asymptotically the same hardness of approximation.
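To make the problem definition concrete, here is a minimal sketch (not from the paper) of a Max $k$-CSP$_R$ instance together with the trivial uniformly random assignment baseline, which in expectation satisfies each satisfiable predicate with probability at least $1/R^k$; the paper's algorithm improves this to $\Omega(\log R/R^{k-1})$. The representation of a predicate as a set of satisfying $k$-tuples, and all function names below, are illustrative assumptions, not the authors' formulation.

```python
import itertools
import random

# Hypothetical illustration (not from the paper): a predicate is a pair
# (variables, satisfying_tuples). It depends on exactly k variables, each
# taking a value in {1, ..., R}, and is satisfied when the assigned values
# form one of its satisfying k-tuples.

def num_satisfied(predicates, assignment):
    """Number of predicates satisfied by the given assignment."""
    return sum(
        tuple(assignment[v] for v in variables) in satisfying
        for variables, satisfying in predicates
    )

def brute_force_opt(predicates, n, R):
    """Exact optimum by enumerating all R^n assignments (tiny instances only)."""
    best = 0
    for values in itertools.product(range(1, R + 1), repeat=n):
        best = max(best, num_satisfied(predicates, dict(enumerate(values))))
    return best

def random_assignment(predicates, n, R):
    """Trivial baseline: a uniformly random assignment satisfies each
    satisfiable predicate with probability >= 1/R^k, so it is a
    1/R^k-approximation in expectation."""
    assignment = {v: random.randint(1, R) for v in range(n)}
    return num_satisfied(predicates, assignment), assignment

if __name__ == "__main__":
    # Example: n = 3 variables, R = 2 values, k = 2 predicates.
    preds = [
        ((0, 1), {(1, 2), (2, 1)}),  # "x0 != x1"
        ((1, 2), {(1, 1), (2, 2)}),  # "x1 == x2"
    ]
    print(brute_force_opt(preds, n=3, R=2))    # optimum is 2
    print(random_assignment(preds, n=3, R=2))  # random baseline
```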

Citations (8)
