Noisy population recovery in polynomial time (1602.07616v1)

Published 24 Feb 2016 in cs.CC, cs.DS, and cs.LG

Abstract: In the noisy population recovery problem of Dvir et al., the goal is to learn an unknown distribution $f$ on binary strings of length $n$ from noisy samples. For some parameter $\mu \in [0,1]$, a noisy sample is generated by flipping each coordinate of a sample from $f$ independently with probability $(1-\mu)/2$. We assume an upper bound $k$ on the size of the support of the distribution, and the goal is to estimate the probability of any string to within some given error $\varepsilon$. It is known that the algorithmic complexity and sample complexity of this problem are polynomially related to each other. We show that for $\mu > 0$, the sample complexity (and hence the algorithmic complexity) is bounded by a polynomial in $k$, $n$ and $1/\varepsilon$, improving upon the previous best result of $\mathsf{poly}(k^{\log\log k},n,1/\varepsilon)$ due to Lovett and Zhang. Our proof combines ideas from Lovett and Zhang with a \emph{noise attenuated} version of M\"{o}bius inversion. In turn, the latter crucially uses the construction of \emph{robust local inverse} due to Moitra and Saks.
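For concreteness, here is a minimal Python sketch of the sampling model described in the abstract: draw a string from $f$, then flip each bit independently with probability $(1-\mu)/2$. The distribution, support size, and string length below are illustrative choices, not taken from the paper.

```python
import random

def noisy_sample(f, mu):
    """Draw one noisy sample: pick x ~ f, then flip each coordinate
    independently with probability (1 - mu) / 2."""
    # f maps binary strings of length n to their probabilities (hypothetical format)
    x = random.choices(list(f.keys()), weights=list(f.values()))[0]
    flip_prob = (1.0 - mu) / 2.0
    return ''.join(
        bit if random.random() >= flip_prob else ('1' if bit == '0' else '0')
        for bit in x
    )

# Example: a distribution with support size k = 2 on strings of length n = 4
f = {'0000': 0.7, '1111': 0.3}
print(noisy_sample(f, mu=0.9))
```

The recovery task is then to estimate each probability in $f$ to within $\varepsilon$ given only such noisy samples; the paper bounds how many samples suffice as a polynomial in $k$, $n$, and $1/\varepsilon$.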

Authors (3)
  1. Anindya De (52 papers)
  2. Michael Saks (23 papers)
  3. Sijian Tang (1 paper)
Citations (17)
