Code Sparsification and its Applications (2311.00788v1)

Published 1 Nov 2023 in cs.DS

Abstract: We introduce a notion of code sparsification that generalizes the notion of cut sparsification in graphs. For a (linear) code $\mathcal{C} \subseteq \mathbb{F}_q^n$ of dimension $k$, a $(1 \pm \epsilon)$-sparsification of size $s$ is given by a weighted set $S \subseteq [n]$ with $|S| \leq s$ such that for every codeword $c \in \mathcal{C}$, the projection $c|_S$ of $c$ to the set $S$ has (weighted) Hamming weight that is a $(1 \pm \epsilon)$-approximation of the Hamming weight of $c$. We show that for every code there exists a $(1 \pm \epsilon)$-sparsification of size $s = \widetilde{O}(k \log (q) / \epsilon^2)$. This immediately implies known results on graph and hypergraph cut sparsification up to polylogarithmic factors (with a simple unified proof). One application of our result is near-linear size sparsifiers for constraint satisfaction problems (CSPs) over $\mathbb{F}_p$-valued variables whose unsatisfying assignments can be expressed as the zeros of a linear equation modulo a prime $p$. Building on this, we obtain a complete characterization of ternary Boolean CSPs that admit near-linear size sparsification. Finally, via a connection between the eigenvalues of the Laplacians of Cayley graphs over $\mathbb{F}_2^k$ and the weights of codewords, we also give the first proof of the existence of spectral Cayley graph sparsifiers over $\mathbb{F}_2^k$ by Cayley graphs, i.e., where we sparsify the set of generators to nearly-optimal size.
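
To make the definition concrete, here is a minimal sketch (illustrative names, not code from the paper) that brute-force checks whether a candidate weighted set $S$ is a $(1 \pm \epsilon)$-sparsification of a small binary code given by a generator matrix, directly from the definition above:

```python
import itertools

import numpy as np

def weight(c, w=None):
    """(Weighted) Hamming weight: total weight on the nonzero coordinates."""
    nz = c != 0
    return int(nz.sum()) if w is None else float(w[nz].sum())

def is_sparsifier(G, S, w, eps):
    """Check that coordinates S with weights w form a (1 +/- eps)-sparsification
    of the binary code generated by the rows of G, by enumerating all 2^k
    codewords (feasible only for tiny k)."""
    k, n = G.shape
    wvec = np.zeros(n)
    wvec[np.array(S)] = w
    for msg in itertools.product([0, 1], repeat=k):
        c = (np.array(msg) @ G) % 2
        full = weight(c)
        if full == 0:
            continue  # the zero codeword is preserved trivially
        if not (1 - eps) * full <= weight(c, wvec) <= (1 + eps) * full:
            return False
    return True

# Sanity check: keeping all n coordinates with unit weights is a
# (1 +/- 0)-sparsification of any code.
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])
assert is_sparsifier(G, S=range(4), w=np.ones(4), eps=0.0)
```

The theorem asserts that a set $S$ of size $\widetilde{O}(k \log(q)/\epsilon^2)$ passing this check always exists; the exhaustive check itself only illustrates the definition, since it takes $2^k$ time.

The Cayley-graph application rests on a standard identity: for the Cayley graph over $\mathbb{F}_2^k$ with generators $s_1, \dots, s_n$, the Laplacian eigenvalue indexed by a character $x \in \mathbb{F}_2^k$ equals twice the Hamming weight of the codeword $x^{\top} G$, where $G$ is the $k \times n$ matrix whose columns are the $s_i$. Continuing the snippet above, a small numerical check of this identity (again illustrative, not the paper's code):

```python
def check_cayley_identity(gens):
    """Verify lambda_x = 2 * wt(x^T G) for Cay(F_2^k, gens): the Laplacian
    eigenvalues of an abelian Cayley graph are lambda_x = sum_s (1 - (-1)^{<x,s>}),
    one for each character x in F_2^k."""
    G = np.array(gens).T  # k x n matrix with the generators as columns
    k = G.shape[0]
    for x in itertools.product([0, 1], repeat=k):
        x = np.array(x)
        lam = sum(1 - (-1) ** (int(x @ s) % 2) for s in gens)
        assert lam == 2 * weight((x @ G) % 2)

check_cayley_identity([np.array(s) for s in [(1, 0), (0, 1), (1, 1)]])
```

By this identity, sparsifying the generator set while approximately preserving all codeword weights approximately preserves the entire Laplacian spectrum, which is the sense in which the resulting Cayley graph sparsifiers are spectral.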

Citations (2)
