
Abstract

Several randomization mechanisms for local differential privacy (LDP), such as randomized response, have been well studied with the goal of improving utility. However, recent studies show that LDP is inherently vulnerable to malicious data providers. Because the data collector must estimate the underlying data distribution only from already-randomized reports, malicious data providers can manipulate their outputs before sending them; the randomization itself grants them plausible deniability. Attackers can skew the estimates effectively, since the estimates are computed by normalizing with the randomization probabilities defined in the LDP protocol, and they can even control the estimates outright. In this paper, we show how to prevent malicious attackers from compromising an LDP protocol. Our approach is to employ a verifiable randomization mechanism: the data collector can verify that every data provider faithfully executed the agreed randomization mechanism. Our proposed method completely protects the LDP protocol from output manipulation and significantly mitigates the expected damage from attacks. We do not assume any specific attack; the method works against general output manipulation and is therefore more powerful than previously proposed countermeasures. We describe secure versions of three state-of-the-art LDP protocols and empirically show that they incur acceptable overheads across a range of parameters.
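To make the attack surface concrete, the sketch below simulates one standard LDP protocol, k-ary randomized response (k-RR), and an output-manipulation attack against it. This is a minimal illustration assembled from the abstract's description, not the paper's construction: the parameter values, function names, and the choice of k-RR as the example protocol are all assumptions for illustration. It shows why the collector-side normalization amplifies manipulated reports: the unbiased estimator divides each observed count by (P - Q) < 1, so every fabricated report shifts the estimate by more than one.

```python
import math
import random

# Hypothetical parameters for illustration only.
EPSILON = 1.0   # privacy budget
K = 4           # domain size: values 0..K-1

P = math.exp(EPSILON) / (math.exp(EPSILON) + K - 1)  # prob. of reporting the true value
Q = 1.0 / (math.exp(EPSILON) + K - 1)                # prob. of each other value

def krr_randomize(value: int) -> int:
    """Honest k-RR randomization executed by a data provider."""
    if random.random() < P:
        return value
    # Otherwise report one of the K-1 other values uniformly at random.
    other = random.randrange(K - 1)
    return other if other < value else other + 1

def estimate_counts(reports: list[int], n: int) -> list[float]:
    """Collector-side unbiased estimate, inverting the randomization.

    Each observed count c_v is debiased as (c_v - n*Q) / (P - Q);
    the division by (P - Q) < 1 amplifies manipulated reports.
    """
    counts = [0] * K
    for r in reports:
        counts[r] += 1
    return [(c - n * Q) / (P - Q) for c in counts]

if __name__ == "__main__":
    random.seed(0)
    n_honest, n_malicious = 10_000, 200
    true_data = [random.randrange(K) for _ in range(n_honest)]

    # Honest providers run the agreed mechanism.
    reports = [krr_randomize(v) for v in true_data]
    # Output-manipulating providers skip randomization entirely and
    # always report a target value; from the reports alone, the
    # collector cannot distinguish them from honest ones.
    reports += [0] * n_malicious

    est = estimate_counts(reports, n_honest + n_malicious)
    print("estimated counts:   ", [round(e) for e in est])
    print("true honest counts: ", [true_data.count(v) for v in range(K)])
    print("amplification per malicious report: %.2f" % (1 / (P - Q)))
```

With these example parameters, 1/(P - Q) is roughly 3.3, so 200 manipulated reports inflate the target value's estimated count by about 660. A verifiable randomization mechanism, as the paper proposes, closes this gap by letting the collector check that each report was actually produced by the agreed mechanism rather than chosen freely.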
