Non-Malleable Extractors, Two-Source Extractors and Privacy Amplification (1112.1045v2)

Published 5 Dec 2011 in cs.CR and cs.CC

Abstract: Dodis and Wichs introduced the notion of a non-malleable extractor to study the problem of privacy amplification with an active adversary. A non-malleable extractor is a much stronger version of a strong extractor. Previously, only two constructions of non-malleable extractors were known, both of which work only for (n, k)-sources with k > n/2. Interestingly, both constructions are also two-source extractors. In this paper, we present a strong connection between non-malleable extractors and two-source extractors. The first part of the connection shows that non-malleable extractors can be used to construct two-source extractors; with appropriate parameters, the resulting two-source extractor beats the best known construction of two-source extractors. This partially explains why previous constructions of non-malleable extractors only work for sources with entropy rate > 1/2, and why explicit non-malleable extractors for small min-entropy may be hard to obtain. The second part of the connection shows that certain two-source extractors can be used to construct non-malleable extractors. Using this connection, we obtain the first construction of non-malleable extractors for k < n/2. Specifically, we give an unconditional construction for min-entropy k = (1/2 - \delta)n for some constant \delta > 0, and a conditional (semi-explicit) construction that can potentially achieve k = \alpha n for any constant \alpha > 0. Finally, despite the lack of explicit non-malleable extractors for arbitrary linear min-entropy, we give the first 2-round privacy amplification protocol with asymptotically optimal entropy loss and communication complexity for (n, k)-sources with k = \alpha n for any constant \alpha > 0. This dramatically improves previous results and answers an open problem in [DLWZ11].
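
For readers unfamiliar with the central object, the following is a minimal sketch of the definition, roughly following Dodis and Wichs; the notation and exact conventions here are illustrative and may differ slightly from the paper.

A function \mathrm{nmExt} : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m is a (k, \epsilon)-non-malleable extractor if for every (n, k)-source X, a uniformly random seed Y independent of X, and every adversarial function \mathcal{A} : \{0,1\}^d \to \{0,1\}^d with no fixed points (\mathcal{A}(y) \neq y for all y),

(\mathrm{nmExt}(X, Y), \mathrm{nmExt}(X, \mathcal{A}(Y)), Y) \approx_\epsilon (U_m, \mathrm{nmExt}(X, \mathcal{A}(Y)), Y),

where U_m is uniform on \{0,1\}^m and independent of the other components, and \approx_\epsilon denotes statistical distance at most \epsilon. A strong extractor only requires (\mathrm{Ext}(X, Y), Y) \approx_\epsilon (U_m, Y); non-malleability additionally demands that the output remain close to uniform even given the extractor's output on an adversarially modified seed, which is what makes it useful against an active adversary in privacy amplification.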

Citations (58)


