
A Linear Reduction Method for Local Differential Privacy and Log-lift

(2101.09689)
Published Jan 24, 2021 in cs.IT, math.IT, and stat.AP

Abstract

This paper considers the problem of publishing data $X$ while protecting correlated sensitive information $S$. We propose a linear method to generate the sanitized data $Y$, with the same alphabet $\mathcal{Y} = \mathcal{X}$, that attains local differential privacy (LDP) and log-lift at the same time. It is revealed that both LDP and log-lift are inversely proportional to the statistical distance between the conditional probability $P_{Y|S}(x|s)$ and the marginal probability $P_Y(x)$: the closer the two probabilities are, the more private $Y$ is. Specifying $P_{Y|S}(x|s)$ to linearly reduce this distance, $|P_{Y|S}(x|s) - P_Y(x)| = (1-\alpha)|P_{X|S}(x|s) - P_X(x)|, \forall s, x$, for some $\alpha \in (0,1]$, we study the problem of how to generate $Y$ from the original data $S$ and $X$. The Markov randomization/sanitization scheme $P_{Y|X}(x|x') = P_{Y|S,X}(x|s,x')$ is obtained by solving linear equations. The optimal non-Markov sanitization, the transition probability $P_{Y|S,X}(x|s,x')$ that depends on $S$, can be determined by maximizing the data utility subject to linear equality constraints. We compute the solution for two linear utility functions: the expected distance and the total variation distance. It is shown that the non-Markov randomization significantly improves data utility, and that the marginal probability $P_X(x)$ remains unchanged after the linear sanitization: $P_Y(x) = P_X(x), \forall x \in \mathcal{X}$.
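To make the construction concrete, here is a minimal numerical sketch in Python. All function names, the toy distribution, and the choice of `scipy.optimize.linprog` are illustrative assumptions, not taken from the paper. The sketch computes the linear-reduction target $P_{Y|S}(x|s) = P_X(x) + (1-\alpha)\bigl(P_{X|S}(x|s) - P_X(x)\bigr)$ (using the abstract's property that $P_Y = P_X$), then solves for a non-Markov sanitizer $P_{Y|S,X}$ as a linear program, reading the expected-distance utility as minimizing the expected distortion $E[d(X, Y)]$ subject to the linear equality constraints.

```python
# Hypothetical sketch of the linear reduction target and the non-Markov
# sanitization LP described in the abstract; names and the toy joint
# distribution are assumptions, not the paper's.
import numpy as np
from scipy.optimize import linprog

def linear_reduction_target(P_SX, alpha):
    """Return P_{Y|S} with P_{Y|S} - P_Y = (1-alpha)(P_{X|S} - P_X),
    using the abstract's property that P_Y = P_X."""
    P_S = P_SX.sum(axis=1)               # marginal of S
    P_X = P_SX.sum(axis=0)               # marginal of X (= P_Y)
    P_XgS = P_SX / P_S[:, None]          # P_{X|S}(x|s), rows indexed by s
    # Shrink the distance to the marginal by the factor (1 - alpha).
    return P_X[None, :] + (1.0 - alpha) * (P_XgS - P_X[None, :])

def non_markov_sanitizer(P_SX, alpha, dist):
    """Solve for P_{Y|S,X}(y|s,x') minimizing E[d(X, Y)] subject to the
    linear reduction equality constraints (one reading of the paper's
    expected-distance utility)."""
    nS, nX = P_SX.shape
    P_XgS = P_SX / P_SX.sum(axis=1)[:, None]
    target = linear_reduction_target(P_SX, alpha)   # P_{Y|S}(y|s)

    # Decision variables: w[s, x', y], flattened.
    nvar = nS * nX * nX
    idx = lambda s, xp, y: (s * nX + xp) * nX + y

    # Objective: sum_{s,x',y} P_{S,X}(s,x') * d(x', y) * w[s,x',y].
    c = np.zeros(nvar)
    for s in range(nS):
        for xp in range(nX):
            for y in range(nX):
                c[idx(s, xp, y)] = P_SX[s, xp] * dist[xp, y]

    # Equality constraints:
    # (1) sum_{x'} P_{X|S}(x'|s) * w[s,x',y] = target[s,y]  for all s, y
    # (2) sum_{y} w[s,x',y] = 1                            for all s, x'
    A_eq, b_eq = [], []
    for s in range(nS):
        for y in range(nX):
            row = np.zeros(nvar)
            for xp in range(nX):
                row[idx(s, xp, y)] = P_XgS[s, xp]
            A_eq.append(row); b_eq.append(target[s, y])
    for s in range(nS):
        for xp in range(nX):
            row = np.zeros(nvar)
            for y in range(nX):
                row[idx(s, xp, y)] = 1.0
            A_eq.append(row); b_eq.append(1.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0.0, 1.0), method="highs")
    assert res.success, res.message
    return res.x.reshape(nS, nX, nX)     # P_{Y|S,X}(y|s,x')

# Toy example: binary S, ternary X; 0/1 (Hamming) distance on the alphabet.
P_SX = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.25, 0.20]])
dist = 1.0 - np.eye(3)
W = non_markov_sanitizer(P_SX, alpha=0.5, dist=dist)
print(W.round(3))
```

One sanity check built into this formulation: the LP is always feasible, since setting $P_{Y|S,X}(y|s,x') = P_{Y|S}(y|s)$ independently of $x'$ satisfies both constraint families; the optimizer then exploits the extra freedom in $x'$ to keep $Y$ close to $X$, which is the utility gain of the non-Markov scheme the abstract reports.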
