On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective (2402.16778v3)
Abstract: In this paper, we provide lower bounds for Differentially Private (DP) Online Learning algorithms. Our result shows that, for a broad class of $(\varepsilon,\delta)$-DP online algorithms, for any number of rounds $T$ such that $\log T\leq O(1 / \delta)$, the expected number of mistakes incurred by the algorithm grows as $\Omega(\log \frac{T}{\delta})$. This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of $T$. To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning, and it partially addresses the open question posed in Sanyal and Ramponi (2022).
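As a rough numerical illustration of the scaling stated in the abstract, the sketch below (with a hypothetical constant `c`, which the paper does not specify) contrasts the $T$-independent non-private mistake bound (governed by the Littlestone dimension) with the $\Omega(\log \frac{T}{\delta})$ DP lower bound:

```python
import math

def dp_mistake_lower_bound(T: int, delta: float, c: float = 1.0) -> float:
    """Illustrative Omega(log(T / delta)) lower bound; c is a hypothetical constant."""
    # The paper states the bound in the regime log T <= O(1 / delta).
    assert math.log(T) <= 1.0 / delta, "regime assumption: log T <= O(1/delta)"
    return c * math.log(T / delta)

# Non-private online learning: mistakes are bounded by the Littlestone
# dimension of the class, independent of T (here a hypothetical d = 5).
littlestone_dim = 5

for T in (10**2, 10**4, 10**6):
    print(f"T={T:>8}  DP lower bound ~ {dp_mistake_lower_bound(T, delta=1e-6):.2f}"
          f"  non-private bound = {littlestone_dim}")
```

The DP lower bound grows with $T$ while the non-private bound stays constant, which is exactly the separation the paper establishes.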
- Private PAC learning implies finite Littlestone dimension. In Symposium on Theory of Computing (STOC), 2019.
- Private and online learnability are equivalent. Journal of the ACM (JACM), 2022.
- Private empirical risk minimization: Efficient algorithms and tight error bounds. In Symposium on Foundations of Computer Science (FOCS), 2014.
- Characterizing the sample complexity of private learners. In Conference on Innovations in Theoretical Computer Science (ITCS), 2013a.
- Private learning and sanitization: Pure vs. approximate differential privacy. In International Workshop on Approximation Algorithms for Combinatorial Optimization, 2013b.
- Bounds on the sample complexity for private learning and private data release. Machine Learning, 2014.
- Agnostic online learning. In Conference On Learning Theory (COLT), 2009.
- Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM), 1989.
- N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
- Private and continual release of statistics. Transactions on Information and System Security (TISSEC), 2011.
- Differentially private empirical risk minimization. Journal of Machine Learning Research (JMLR), 2011.
- Distribution-independent PAC learning of halfspaces with Massart noise. In Conference on Neural Information Processing Systems (NeurIPS), 2019.
- C. Dwork and V. Feldman. Privacy-preserving prediction. In Conference On Learning Theory (COLT), 2018.
- Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography (TCC), 2006.
- Differential privacy under continual observation. In Symposium on Theory of Computing (STOC), 2010a.
- Boosting and differential privacy. In Symposium on Foundations of Computer Science (FOCS), 2010b.
- V. Feldman and D. Xiao. Sample complexity bounds on differentially private learning via communication complexity. In Conference on Learning Theory (COLT), 2014.
- Sample-efficient proper PAC learning with approximate differential privacy. In Symposium on Theory of Computing (STOC), 2021.
- N. Golowich and R. Livni. Littlestone classes are privately online learnable. In Conference on Neural Information Processing Systems (NeurIPS), 2021.
- Online learning with simple predictors and a combinatorial characterization of minimax in 0/1 games. In Conference on Learning Theory (COLT), 2021.
- The price of differential privacy under continual observation. In International Conference on Machine Learning (ICML), 2023.
- Black-box differential privacy for interactive ML. In Conference on Neural Information Processing Systems (NeurIPS), 2023.
- What can we learn privately? SIAM Journal on Computing, 2011.
- Robust mediators in large games. arXiv:1512.02698, 2015.
- N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 1988.
- Private everlasting prediction. In Conference on Neural Information Processing Systems (NeurIPS), 2023.
- A. Sanyal and G. Ramponi. Open problem: Do you pay for privacy in online learning? In Conference on Learning Theory (COLT), 2022.
- L. G. Valiant. A theory of the learnable. Communications of the ACM, 1984.