Private Online Learning via Lazy Algorithms (2406.03620v1)

Published 5 Jun 2024 in cs.LG, cs.CR, cs.DS, math.OC, and stat.ML

Abstract: We study the problem of private online learning, specifically, online prediction from experts (OPE) and online convex optimization (OCO). We propose a new transformation that converts lazy online learning algorithms into private algorithms. We apply our transformation to differentially private OPE and OCO using existing lazy algorithms for these problems. Our final algorithms obtain regret that significantly improves on prior rates in the high-privacy regime $\varepsilon \ll 1$, obtaining $\sqrt{T \log d} + T^{1/3} \log(d)/\varepsilon^{2/3}$ for DP-OPE and $\sqrt{T} + T^{1/3} \sqrt{d}/\varepsilon^{2/3}$ for DP-OCO. We also complement our results with a lower bound for DP-OPE, showing that these rates are optimal for a natural family of low-switching private algorithms.
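To give intuition for why laziness helps privacy, the sketch below shows a generic block-based "noisy leader" baseline for OPE. This is not the paper's transformation; it is a hypothetical illustration of the underlying idea: an algorithm that switches its decision rarely touches the data through only a few selection steps, so the privacy budget is composed over those few steps rather than over all $T$ rounds. The function name `lazy_private_ope` and its parameters are invented for this example, and the Laplace noise scale is heuristic rather than a verified DP analysis.

```python
# Hypothetical sketch: a block-based noisy-leader baseline for online
# prediction from experts (OPE). Not the paper's algorithm; it only
# illustrates why low-switching algorithms are easier to privatize.
import numpy as np

def lazy_private_ope(losses, num_blocks, epsilon, seed=None):
    """Play OPE lazily: pick an expert once per block via report-noisy-min.

    losses:     (T, d) array of per-round expert losses in [0, 1].
    num_blocks: number of blocks B; the algorithm switches experts at most
                B - 1 times, so noise is composed over only B selections.
    epsilon:    total privacy budget, split evenly across the B selections.
                The scale B / epsilon below is a heuristic placeholder; a
                real analysis would fix constants via DP composition.
    Returns the total loss incurred.
    """
    rng = np.random.default_rng(seed)
    T, d = losses.shape
    block_len = int(np.ceil(T / num_blocks))
    cum = np.zeros(d)              # cumulative losses observed so far
    total = 0.0
    for start in range(0, T, block_len):
        end = min(start + block_len, T)
        # Selection step: noisy argmin over cumulative losses. One round's
        # loss vector perturbs each coordinate by at most 1 (sensitivity 1).
        noisy = cum + rng.laplace(scale=num_blocks / epsilon, size=d)
        expert = int(np.argmin(noisy))
        # The "lazy" part: commit to this expert for the whole block.
        total += losses[start:end, expert].sum()
        cum += losses[start:end].sum(axis=0)
    return total

# Example usage with synthetic losses.
rng = np.random.default_rng(0)
losses = rng.random((10_000, 20))
print(lazy_private_ope(losses, num_blocks=20, epsilon=1.0, seed=1))
```

Fewer blocks mean fewer noisy selections (less privacy cost per selection) but a staler leader within each block; tuning this trade-off is what drives rates of the form $T^{1/3}/\varepsilon^{2/3}$, which the paper obtains via a more refined transformation of existing lazy algorithms.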

Authors (4)
  1. Hilal Asi (29 papers)
  2. Tomer Koren (79 papers)
  3. Daogao Liu (34 papers)
  4. Kunal Talwar (83 papers)
