Emergent Mind

Multi-User Privacy Mechanism Design with Non-zero Leakage

(2211.15525)
Published Nov 28, 2022 in cs.IT and math.IT

Abstract

A privacy mechanism design problem is studied through the lens of information theory. In this work, an agent observes useful data $Y=(Y_1,\dots,Y_N)$ that is correlated with private data $X=(X_1,\dots,X_N)$, which is assumed to be accessible to the agent as well. Here, we consider $K$ users, where user $i$ demands a sub-vector of $Y$, denoted by $C_i$. The agent wishes to disclose $C_i$ to user $i$; since $C_i$ is correlated with $X$, it cannot be disclosed directly. A privacy mechanism is designed to generate disclosed data $U$ which maximizes a linear combination of the users' utilities while satisfying a bounded privacy constraint in terms of mutual information. In a similar work it has been assumed that $X_i$ is a deterministic function of $Y_i$; in this work, however, we let $X_i$ and $Y_i$ be arbitrarily correlated. First, an upper bound on the privacy-utility trade-off is obtained by using a specific transformation together with the Functional Representation Lemma and the Strong Functional Representation Lemma, and we show that the upper bound can be decomposed into $N$ parallel problems. Next, lower bounds on the privacy-utility trade-off are derived using the Functional Representation Lemma and the Strong Functional Representation Lemma. The upper bound is tight within a constant, and the lower bounds assert that the disclosed data is independent of all $\{X_j\}_{j=1}^{N}$ except one, to which the maximum allowed leakage is allocated. Finally, the obtained bounds are studied in special cases.
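The abstract's core constraint — disclose $U$ while keeping the mutual-information leakage $I(U;X)$ below a budget — can be illustrated numerically. The sketch below is not the paper's mechanism; it is a minimal toy example with binary $X$ and $Y$ and a hypothetical flip-noise channel (parameter `delta` is an illustrative choice) showing that a noisier disclosure reduces the leakage $I(U;X)$ below the raw correlation $I(Y;X)$, at the cost of utility $I(U;Y)$.

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information (in bits) between the two coordinates
    of a discrete joint pmf given as a 2-D array."""
    p1 = p_joint.sum(axis=1, keepdims=True)   # marginal of first variable
    p2 = p_joint.sum(axis=0, keepdims=True)   # marginal of second variable
    mask = p_joint > 0
    ratio = p_joint[mask] / (p1 @ p2)[mask]
    return float((p_joint[mask] * np.log2(ratio)).sum())

# Toy joint pmf of (X, Y): X = Y with probability 0.9,
# so private and useful data are strongly correlated.
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

# Hypothetical disclosure channel p(u|y): U is Y flipped with
# probability delta (randomized-response style; not the paper's design).
delta = 0.2
p_u_given_y = np.array([[1 - delta, delta],
                        [delta, 1 - delta]])

# Joint pmfs of (X, U) and (Y, U); U depends on X only through Y,
# so X - Y - U forms a Markov chain.
p_xu = p_xy @ p_u_given_y
p_y = p_xy.sum(axis=0)
p_yu = np.diag(p_y) @ p_u_given_y

leakage = mutual_information(p_xu)   # privacy leakage I(U;X)
utility = mutual_information(p_yu)   # utility I(U;Y)
print(f"I(U;X) = {leakage:.3f} bits, I(U;Y) = {utility:.3f} bits")
```

By the data-processing inequality along the chain $X - Y - U$, the leakage $I(U;X)$ never exceeds the utility $I(U;Y)$, and both fall below $I(X;Y)$; sweeping `delta` traces out a crude privacy-utility trade-off curve for this toy channel.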
