
Deterministic Extractors for Additive Sources (1410.7253v1)

Published 27 Oct 2014 in cs.CC

Abstract: We propose a new model of a weakly random source that admits randomness extraction. Our model of additive sources includes such natural sources as uniform distributions on arithmetic progressions (APs), generalized arithmetic progressions (GAPs), and Bohr sets, each of which generalizes affine sources. We give an explicit extractor for additive sources with linear min-entropy over both $\mathbb{Z}_p$ and $\mathbb{Z}_p^n$, for large prime $p$, although our results over $\mathbb{Z}_p^n$ require that the source further satisfy a list-decodability condition. As a corollary, we obtain explicit extractors for APs, GAPs, and Bohr sources with linear min-entropy, although again our results over $\mathbb{Z}_p^n$ require the list-decodability condition. We further explore special cases of additive sources. We improve previous constructions of line sources (affine sources of dimension 1), requiring a field of size linear in $n$, rather than $\Omega(n^2)$ by Gabizon and Raz. This beats the non-explicit bound of $\Theta(n \log n)$ obtained by the probabilistic method. We then generalize this result to APs and GAPs.
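For context, here are the standard definitions behind the abstract's terminology (background for the reader, not text from the paper itself). The min-entropy of a source $X$ over a finite set $S$ is

$$H_\infty(X) = -\log_2 \max_{x \in S} \Pr[X = x],$$

and "linear min-entropy" means $H_\infty(X) \ge \delta \log_2 |S|$ for some constant $\delta > 0$. A deterministic $\epsilon$-extractor for a class $\mathcal{C}$ of sources is a single fixed function $\mathrm{Ext} : S \to \{0,1\}^m$ such that for every source $X \in \mathcal{C}$, the output distribution $\mathrm{Ext}(X)$ is within statistical distance $\epsilon$ of uniform on $\{0,1\}^m$. For example, the uniform distribution on the AP $\{a + ib \bmod p : 0 \le i < k\}$ with $b \not\equiv 0 \pmod p$ has min-entropy exactly $\log_2 k$.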
