
Abstract

Longitudinal data tracking under Local Differential Privacy (LDP) is a challenging task. Baseline solutions that repeatedly invoke a protocol designed for one-time computation lead to linear decay in the privacy or utility guarantee with respect to the number of computations. To avoid this, the recent approach of Erlingsson et al. (2020) exploits the potential sparsity of user data that changes only infrequently. Their protocol targets the fundamental problem of frequency estimation for longitudinal binary data, with $\ell_\infty$ error of $O ( (1 / \epsilon) \cdot (\log d)^{3/2} \cdot k \cdot \sqrt{ n \cdot \log ( d / \beta ) } )$, where $\epsilon$ is the privacy budget, $d$ is the number of time periods, $k$ is the maximum number of changes of user data, and $\beta$ is the failure probability. Notably, the error bound scales polylogarithmically with $d$, but linearly with $k$. In this paper, we break through the linear dependence on $k$ in the estimation error. Our new protocol has error $O ( (1 / \epsilon) \cdot (\log d) \cdot \sqrt{ k \cdot n \cdot \log ( d / \beta ) } )$, matching the lower bound up to a logarithmic factor. The protocol is online: it outputs an estimate at each time period. The key breakthrough is a new randomizer for sequential data, FutureRand, with two key features. The first is a composition strategy that correlates the noise across the non-zero elements of the sequence. The second is a pre-computation technique which, by exploiting the symmetry of the input space, enables the randomizer to output results on the fly, without knowing future inputs. Our protocol closes the error gap between existing online and offline algorithms.
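To make the baseline that the abstract criticizes concrete, the sketch below shows the naive approach: running a one-shot binary randomized-response protocol at every one of the $d$ time periods. By sequential composition, each round can only spend $\epsilon / d$ of the total budget, so per-round noise grows linearly in $d$. This is an illustrative sketch of the generic baseline only; the function names are our own, and it does not implement the paper's FutureRand randomizer.

```python
import math
import random

def randomized_response(bit: int, eps: float) -> int:
    """One-shot binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise flip it. Satisfies eps-LDP."""
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def baseline_longitudinal_report(bits, eps_total: float):
    """Naive baseline: invoke the one-time protocol at every time period.
    Sequential composition forces a per-round budget of eps_total / d,
    so the per-round noise (and hence the error) degrades linearly in d."""
    d = len(bits)
    eps_round = eps_total / d  # linear decay of the per-round budget
    return [randomized_response(b, eps_round) for b in bits]

def estimate_frequency(reports, eps_round: float, n: int) -> float:
    """Debias the mean of n users' reports for a single time period.
    E[report] = (1 - p) + bit * (2p - 1), so invert that affine map."""
    p = math.exp(eps_round) / (math.exp(eps_round) + 1.0)
    mean = sum(reports) / n
    return (mean - (1.0 - p)) / (2.0 * p - 1.0)
```

For a single period with budget $\epsilon$, the debiased estimate concentrates around the true frequency with standard deviation on the order of $1 / (\epsilon \sqrt{n})$; splitting the budget across $d$ periods inflates this by roughly a factor of $d$, which is exactly the linear decay the paper's protocol avoids.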
