Privacy Amplification via Shuffling for Linear Contextual Bandits (2112.06008v1)

Published 11 Dec 2021 in cs.LG

Abstract: Contextual bandit algorithms are widely used in domains where it is desirable to provide a personalized service by leveraging contextual information, which may contain sensitive data that need to be protected. Inspired by this scenario, we study the contextual linear bandit problem with differential privacy (DP) constraints. While the literature has focused on either centralized (joint DP) or local (local DP) privacy, we consider the shuffle model of privacy and show that it is possible to achieve a privacy/utility trade-off between JDP and LDP. By leveraging shuffling from the privacy literature and batching from the bandit literature, we present an algorithm with regret bound $\widetilde{\mathcal{O}}(T^{2/3}/\varepsilon^{1/3})$ that guarantees both central (joint) and local privacy. Our result shows that it is possible to obtain a trade-off between JDP and LDP by leveraging the shuffle model while preserving local privacy.
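
For intuition, below is a minimal Python sketch of the batched shuffle-model pipeline the abstract describes: each user in a batch locally perturbs their sufficient statistics, a shuffler permutes the batch of reports, and the server aggregates them, updating a LinUCB-style estimate only at batch boundaries. The local randomizer, noise calibration, and all constants here are illustrative assumptions, not the paper's exact mechanism (which is calibrated to meet the stated JDP/LDP guarantees).

```python
import numpy as np

rng = np.random.default_rng(0)

def local_randomizer(xxT, xy, sigma):
    # Hypothetical local randomizer: each user adds Gaussian noise to their
    # sufficient statistics before reporting. The paper's actual mechanism
    # and noise scale are not reproduced here.
    return (xxT + rng.normal(0.0, sigma, xxT.shape),
            xy + rng.normal(0.0, sigma, xy.shape))

def shuffle(messages):
    # The shuffler uniformly permutes the privatized reports, hiding which
    # user sent which message; this anonymization is what amplifies the
    # local privacy guarantee.
    perm = rng.permutation(len(messages))
    return [messages[i] for i in perm]

d, T, batch_size = 5, 10_000, 100
K, sigma, lam, beta = 10, 1.0, 1.0, 1.0   # illustrative values

theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

V = lam * np.eye(d)   # regularized design matrix
b = np.zeros(d)       # aggregated (noisy) response vector
theta_hat = np.zeros(d)

for start in range(0, T, batch_size):
    V_inv = np.linalg.inv(V)   # the estimate is frozen within a batch
    messages = []
    for _ in range(batch_size):
        contexts = rng.normal(size=(K, d))
        # LinUCB-style optimistic action selection on the stale estimate;
        # clamp at 0 since the injected noise can break positive-definiteness
        # in this toy sketch.
        bonus_sq = np.einsum('ij,jk,ik->i', contexts, V_inv, contexts)
        bonus = np.sqrt(np.maximum(bonus_sq, 0.0))
        x = contexts[np.argmax(contexts @ theta_hat + beta * bonus)]
        y = x @ theta_star + rng.normal(0.0, 0.1)
        messages.append(local_randomizer(np.outer(x, x), y * x, sigma))
    # The server sees only the shuffled batch and aggregates it.
    for xxT_priv, xy_priv in shuffle(messages):
        V += xxT_priv
        b += xy_priv
    theta_hat = np.linalg.solve(V, b)
```

Because summation is permutation-invariant, shuffling does not change the aggregate the learner computes; its role is purely in the privacy analysis, where anonymizing the reports lets each user add less local noise than pure LDP would require, which is what enables a regret rate between the JDP and LDP extremes.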

Authors (4)
  1. Evrard Garcelon (13 papers)
  2. Kamalika Chaudhuri (122 papers)
  3. Vianney Perchet (91 papers)
  4. Matteo Pirotta (45 papers)
Citations (16)
