Randomize the Future: Asymptotically Optimal Locally Private Frequency Estimation Protocol for Longitudinal Data (2112.12279v2)

Published 22 Dec 2021 in cs.CR

Abstract: Longitudinal data tracking under Local Differential Privacy (LDP) is a challenging task. Baseline solutions that repeatedly invoke a protocol designed for one-time computation lead to linear decay in the privacy or utility guarantee with respect to the number of computations. To avoid this, the recent approach of Erlingsson et al. (2020) exploits the potential sparsity of user data that changes only infrequently. Their protocol targets the fundamental problem of frequency estimation for longitudinal binary data, with $\ell_\infty$ error of $O ( (1 / \epsilon) \cdot (\log d)^{3 / 2} \cdot k \cdot \sqrt{ n \cdot \log ( d / \beta ) } )$, where $\epsilon$ is the privacy budget, $d$ is the number of time periods, $k$ is the maximum number of changes of user data, and $\beta$ is the failure probability. Notably, the error bound scales polylogarithmically with $d$, but linearly with $k$. In this paper, we break through the linear dependence on $k$ in the estimation error. Our new protocol has error $O ( (1 / \epsilon) \cdot (\log d) \cdot \sqrt{ k \cdot n \cdot \log ( d / \beta ) } )$, matching the lower bound up to a logarithmic factor. The protocol is an online one that outputs an estimate at each time period. The key breakthrough is a new randomizer for sequential data, FutureRand, with two key features. The first is a composition strategy that correlates the noise across the non-zero elements of the sequence. The second is a pre-computation technique which, by exploiting the symmetry of the input space, enables the randomizer to output the results on the fly, without knowing future inputs. Our protocol closes the error gap between existing online and offline algorithms.

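For context on the baseline the abstract refers to, here is a minimal sketch (not the paper's FutureRand protocol) of naively re-running a one-shot LDP frequency estimator at every time period: each user reports one bit per period via standard binary randomized response, and sequential composition forces a total budget $\epsilon$ to be split across the $d$ periods. The function names (`randomized_response`, `estimate_frequency`) and parameter values below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def randomized_response(bit: int, eps: float) -> int:
    # Standard binary randomized response: report the true bit with
    # probability e^eps / (e^eps + 1), otherwise flip it. A single
    # report satisfies eps-local differential privacy.
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports: list, eps: float) -> float:
    # Debias the noisy reports to obtain an unbiased estimate of the
    # fraction of users whose true bit is 1.
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    mean = sum(reports) / len(reports)
    return (mean - (1.0 - p)) / (2.0 * p - 1.0)

# Baseline for longitudinal data: re-randomize independently every period.
# Sequential composition means a total budget eps_total must be divided
# across the d periods, so each round gets only eps_total / d -- the
# linear decay in the privacy/utility guarantee the abstract mentions.
n, d, eps_total = 10_000, 50, 1.0
eps_per_round = eps_total / d

true_bits = [[random.randint(0, 1) for _ in range(d)] for _ in range(n)]
for t in range(d):
    reports = [randomized_response(true_bits[u][t], eps_per_round) for u in range(n)]
    est = estimate_frequency(reports, eps_per_round)
```

The paper's protocol avoids this even split: by correlating noise across the at most $k$ non-zero changes in a user's sequence and pre-computing outputs so estimates can be released online, the error scales with $\sqrt{k}$ rather than linearly in $k$, as in the bound stated in the abstract.
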
Citations (4)
