Robust algorithms with polynomial loss for near-unanimity CSPs (1607.04787v4)

Published 16 Jul 2016 in cs.DS, cs.CC, and cs.LO

Abstract: An instance of the Constraint Satisfaction Problem (CSP) is given by a family of constraints on overlapping sets of variables, and the goal is to assign values from a fixed domain to the variables so that all constraints are satisfied. In the optimization version, the goal is to maximize the number of satisfied constraints. An approximation algorithm for CSP is called robust if it outputs an assignment satisfying a $(1-g(\varepsilon))$-fraction of constraints on any $(1-\varepsilon)$-satisfiable instance, where the loss function $g$ is such that $g(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0$. We study how the robust approximability of CSPs depends on the set of constraint relations allowed in instances, the so-called constraint language $\Gamma$. All constraint languages admitting a robust polynomial-time algorithm (with some $g$) have been characterised by Barto and Kozik, with the general bound on the loss $g$ being doubly exponential, specifically $g(\varepsilon)=O((\log\log(1/\varepsilon))/\log(1/\varepsilon))$. It is natural to ask when a better loss can be achieved: in particular, polynomial loss $g(\varepsilon)=O(\varepsilon^{1/k})$ for some constant $k$. In this paper, we consider CSPs with a constraint language having a near-unanimity polymorphism. We give two randomized robust algorithms with polynomial loss for such CSPs: one works for any near-unanimity polymorphism and the parameter $k$ in the loss depends on the size of the domain and the arity of the relations in $\Gamma$, while the other works for a special ternary near-unanimity operation called dual discriminator with $k=2$ for any domain size. In the latter case, the CSP is a common generalisation of Unique Games with a fixed domain and 2-SAT. In the former case, we use the algebraic approach to the CSP. Both cases use the standard semidefinite programming relaxation for CSP.
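
The dual discriminator mentioned in the abstract is the ternary operation $d(x,y,z)$ that returns $y$ when $y=z$ and $x$ otherwise; on any domain it satisfies the majority (near-unanimity) identities $d(x,x,y)=d(x,y,x)=d(y,x,x)=x$. A minimal Python sketch (illustrative only, not the paper's algorithm; the function and variable names here are hypothetical) that verifies these identities by brute force over a small domain:

from itertools import product

def dual_discriminator(x, y, z):
    # d(x, y, z) = y if y == z, else x
    return y if y == z else x

def is_near_unanimity(op, domain):
    # A ternary near-unanimity (majority) operation m satisfies
    # m(x, x, y) = m(x, y, x) = m(y, x, x) = x for all x, y in the domain.
    return all(
        op(x, x, y) == x and op(x, y, x) == x and op(y, x, x) == x
        for x, y in product(domain, repeat=2)
    )

# The identities hold on a domain of any size, consistent with the
# abstract's claim that the dual discriminator case gives k = 2
# independently of domain size.
print(is_near_unanimity(dual_discriminator, range(5)))  # True

This check only illustrates the polymorphism condition; the robust algorithms themselves additionally rely on the standard semidefinite programming relaxation for CSP, which is not sketched here.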

Citations (15)
