A Shuffling Framework for Local Differential Privacy (2106.06603v2)

Published 11 Jun 2021 in cs.LG and cs.CR

Abstract: LDP deployments are vulnerable to inference attacks, as an adversary can use the order of the data to link the noisy responses to their identity and, subsequently, to auxiliary information. An alternative model, shuffle DP, prevents this by shuffling the noisy responses uniformly at random. However, this limits data learnability -- only symmetric (input-order-agnostic) functions can be learned. In this paper, we strike a balance and show that systematic shuffling of the noisy responses can thwart specific inference attacks while retaining some meaningful data learnability. To this end, we propose a novel privacy guarantee, d-sigma-privacy, that captures the privacy of the order of a data sequence. d-sigma-privacy allows tuning the granularity at which ordinal information is maintained, which formalizes the degree of resistance to inference attacks and its trade-off with data learnability. Additionally, we propose a novel shuffling mechanism that can achieve d-sigma-privacy and demonstrate its practicality via evaluation on real-world datasets.

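As a rough illustration of the trade-off the abstract describes, the sketch below pairs a standard binary randomized-response mechanism with a shuffle that only permutes responses within fixed-size groups: small groups retain most ordinal information (better learnability, weaker unlinkability), while a single group covering all users recovers the uniformly shuffled model. The function names, the group-size parameter, and the grouping scheme are illustrative assumptions, not the paper's construction; the actual mechanism and its d-sigma-privacy guarantee are defined differently in the paper.

```python
import math
import random

def randomized_response(bit, epsilon):
    # Binary randomized response: report the true bit with probability
    # e^eps / (1 + e^eps), otherwise report its flip.
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else (not bit)

def groupwise_shuffle(responses, group_size):
    # Shuffle uniformly at random only *within* consecutive groups.
    # group_size = len(responses) recovers the fully shuffled model;
    # group_size = 1 keeps the original order (a plain LDP release).
    out = []
    for start in range(0, len(responses), group_size):
        block = responses[start:start + group_size]
        random.shuffle(block)
        out.extend(block)
    return out

if __name__ == "__main__":
    true_bits = [i % 2 == 0 for i in range(12)]                 # users' private bits
    noisy = [randomized_response(b, epsilon=1.0) for b in true_bits]
    released = groupwise_shuffle(noisy, group_size=4)
    print(released)
```

In this toy setup the group size plays the role of the tuning knob the abstract mentions: coarser groups hide more of the ordering (stronger resistance to linkage by position) at the cost of the ordinal structure an analyst could otherwise learn from.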
Authors (4)
  1. Casey Meehan (9 papers)
  2. Amrita Roy Chowdhury (18 papers)
  3. Kamalika Chaudhuri (122 papers)
  4. Somesh Jha (112 papers)
