Abstract

In many systems, the privacy of users depends on the number of participants collectively applying some method to protect their security. Indeed, there are numerous, by now classic, results about revealing aggregated data from a set of users. The conclusion is usually as follows: if you have enough friends with whom to "aggregate" the private data, you can safely reveal your private information. Apart from data aggregation, it has been observed that, in a wider context, privacy can often be reduced to being hidden in a crowd. Generally, the problem is how to create such a crowd. This task may not be easy in some distributed systems, where gathering enough "individuals" is hard for practical reasons. One such example is social networks (or similar systems), where users have only a limited number of semi-trusted contacts and their aim is to reveal some aggregated data in a privacy-preserving manner. This may be particularly problematic in the presence of a strong adversary that can additionally corrupt some users. We show two methods that significantly amplify privacy using only a limited number of local operations and very moderate communication overhead. In addition to a theoretical analysis, we present experimental results on the topologies of real-life social networks, demonstrating that our methods can significantly amplify the privacy of chosen aggregation protocols even under a massive attack by a powerful adversary. We believe, however, that our results can have much wider applications for improving the security of systems based on locally trusted relations.
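The abstract alludes to classic aggregation protocols in which a user's value stays hidden among the contributions of a small set of semi-trusted contacts. As a point of reference, the sketch below shows one standard technique of this kind, additive secret sharing over a ring: each user splits their value into random shares handed to neighbors, so the total is recoverable while any individual value remains hidden unless all of that user's neighbors collude. This is a generic illustration under our own assumptions, not the paper's specific amplification methods.

```python
import secrets

MODULUS = 2**32  # hypothetical ring size; any modulus larger than the true sum works


def share(value: int, num_neighbors: int) -> list[int]:
    """Split `value` into additive shares, one per semi-trusted neighbor.

    Every individual share is uniformly random; the value is recoverable
    only by summing all shares mod MODULUS, so privacy holds unless all
    of the user's neighbors collude.
    """
    shares = [secrets.randbelow(MODULUS) for _ in range(num_neighbors - 1)]
    last = (value - sum(shares)) % MODULUS
    return shares + [last]


def aggregate(all_shares: list[int]) -> int:
    """Sum the shares collected from all users; only the total is revealed."""
    return sum(all_shares) % MODULUS


# Example: three users privately aggregate their values.
values = [7, 13, 4]
collected = []
for v in values:
    collected.extend(share(v, num_neighbors=3))

assert aggregate(collected) == sum(values) % MODULUS
```

The limitation this sketch makes visible is exactly the one the paper targets: with few contacts, a handful of corrupted neighbors suffices to unmask a user, which motivates amplifying privacy beyond what the immediate neighborhood provides.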
