
Learning Sparse Parity with Noise in Linear Samples

(2407.19215)
Published Jul 27, 2024 in cs.CR

Abstract

We revisit the learning parity with noise (LPN) problem with a sparse secret that involves at most $k$ out of $n$ variables. Let $\eta$ denote the noise rate, so that each label is flipped independently with probability $\eta$. In this work, we give separate algorithms for the low-noise and high-noise settings. We present an algorithm with running time $O(\eta \cdot n/k)^k$ for any $\eta$ and $k$ satisfying $n > k/\eta$. This improves the state of the art for learning sparse parity in a wide range of parameters, such as $k \le n^{0.99}$ and $\eta < \sqrt{k/n}$, where the best previously known algorithm had running time at least $\binom{n}{k/2} \ge (n/k)^{k/2}$. Unlike previous approaches based on generating biased samples, our new idea is to combine subset sampling with Gaussian elimination. The resulting algorithm needs only $O(k/\eta + k \log \frac{n}{k})$ samples and is structurally simpler than previous algorithms. In the high-noise setting, we present an improvement on Valiant's classical algorithm, which runs in $n^{\frac{\omega+o(1)}{3}\cdot k}$ time (where $\omega$ is the matrix multiplication exponent) and uses $\tilde{O}(k^2)$ samples. For any $\eta < 1/2$, our algorithm has time complexity $(n/k)^{\frac{\omega+o(1)}{3}\cdot k}$ and sample complexity $\tilde{O}(k)$. Hence it improves on Valiant's algorithm in both time and sample complexity, and it generalizes Valiant's framework to give the state-of-the-art bound for any $k \le n^{0.99}$ and $\eta \in (0.4, 0.5)$.
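
The abstract names two ingredients for the low-noise algorithm: subset sampling and Gaussian elimination over GF(2). The sketch below is not the paper's algorithm; it is a minimal illustrative baseline showing those two ingredients in their most naive form: repeatedly sample a subset of the LPN sample pool, solve the resulting linear system by Gaussian elimination in the hope that the chosen subset is noise-free, and accept a $k$-sparse candidate that agrees with the pool far better than chance. All function names and parameter choices (`gen_lpn_samples`, `solve_gf2`, `recover_sparse_secret`, the verification threshold) are assumptions made for this demonstration.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): a naive
# "sample a subset, then Gaussian-eliminate" baseline for LPN with a
# k-sparse secret. All names and parameters here are assumptions.
import numpy as np


def gen_lpn_samples(n, k, eta, m, rng):
    """Generate m LPN samples (A, b) with a weight-k secret s:
    b_i = <a_i, s> XOR e_i, where e_i = 1 with probability eta."""
    secret = np.zeros(n, dtype=np.uint8)
    secret[rng.choice(n, size=k, replace=False)] = 1
    A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
    noise = (rng.random(m) < eta).astype(np.uint8)
    b = ((A.astype(int) @ secret + noise) % 2).astype(np.uint8)
    return A, b, secret


def solve_gf2(A, b):
    """Gauss-Jordan elimination over GF(2). Returns one solution of
    Ax = b (free variables set to 0), or None if inconsistent."""
    A, b = A.copy(), b.copy()
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        nz = np.nonzero(A[row:, col])[0]
        if len(nz) == 0:
            continue
        p = row + nz[0]
        A[[row, p]], b[[row, p]] = A[[p, row]], b[[p, row]]
        for r in range(m):               # clear column col in all other rows
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == m:
            break
    if any(b[r] and not A[r].any() for r in range(row, m)):
        return None                      # a 0 = 1 row: inconsistent system
    x = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x


def recover_sparse_secret(A, b, k, eta, trials, rng):
    """Repeatedly sample a size-n subset of the pool, hope all chosen
    labels are noise-free (probability (1 - eta)^n per trial), solve by
    Gaussian elimination, and accept a k-sparse candidate whose
    disagreement with the whole pool is near eta rather than 1/2."""
    m, n = A.shape
    for _ in range(trials):
        idx = rng.choice(m, size=n, replace=False)
        cand = solve_gf2(A[idx], b[idx])
        if cand is None or cand.sum() > k:
            continue
        disagreement = np.mean((A.astype(int) @ cand + b) % 2)
        if disagreement < (eta + 0.5) / 2:
            return cand
    return None


rng = np.random.default_rng(0)
n, k, eta, m = 30, 3, 0.05, 600
A, b, secret = gen_lpn_samples(n, k, eta, m, rng)
cand = recover_sparse_secret(A, b, k, eta, trials=300, rng=rng)
print("recovered:", cand is not None and np.array_equal(cand, secret))
```

Each trial succeeds only if the sampled batch is entirely noise-free, which happens with probability about $(1-\eta)^n$, so this baseline is practical only at low noise and small $n$. The paper's subset-sampling strategy instead exploits the $k$-sparsity of the secret to reach the $O(\eta \cdot n/k)^k$ running time quoted above.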
