Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation (2401.10405v1)

Published 18 Jan 2024 in cs.LG

Abstract: Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks. Although various works address privacy and security concerns, they focus on individual defenses; in practice, however, models may undergo simultaneous attacks. This study explores the combination of adversarial training and differentially private training to defend against simultaneous attacks. While differentially private adversarial training, as presented in DP-Adv, outperforms other state-of-the-art methods, it lacks formal privacy guarantees and empirical validation. Thus, in this work, we benchmark the performance of this technique using a membership inference attack and empirically show that the resulting approach is as private as non-robust private models. This work also highlights the need to explore privacy guarantees in dynamic training paradigms.
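The combination the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a plain logistic-regression model, FGSM (Goodfellow et al., reference 6) to craft adversarial examples, and a DP-SGD-style update (Abadi et al., reference 1) with per-example gradient clipping and Gaussian noise. The function names `fgsm_perturb` and `dp_adv_step` are illustrative, not from the paper.

```python
import numpy as np

def fgsm_perturb(x, y, w, eps):
    """Craft an FGSM adversarial example for logistic regression.

    Loss is log(1 + exp(-y * w.x)); the perturbation follows the sign
    of the input gradient, scaled by the attack budget eps.
    """
    margin = y * (x @ w)
    grad_x = -y * w / (1.0 + np.exp(margin))  # d(loss)/dx
    return x + eps * np.sign(grad_x)

def dp_adv_step(X, Y, w, eps=0.1, clip=1.0, sigma=1.0, lr=0.05, rng=None):
    """One DP-SGD-style step on adversarially perturbed examples.

    Per-example gradients are computed on FGSM inputs, clipped to L2
    norm <= clip, summed, noised with Gaussian noise, and averaged.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    grads = []
    for x, y in zip(X, Y):
        x_adv = fgsm_perturb(x, y, w, eps)
        margin = y * (x_adv @ w)
        g = -y * x_adv / (1.0 + np.exp(margin))     # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)  # clip to norm <= clip
        grads.append(g)
    noisy = np.sum(grads, axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy / len(X)

# Toy usage: one noisy, clipped update on random data.
rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))
Y = np.where(rng.random(32) < 0.5, -1.0, 1.0)
w = dp_adv_step(X, Y, np.zeros(5), rng=rng)
```

The point of the sketch is the interaction the paper studies: the adversarial perturbation is recomputed each step against the current weights (a dynamic training paradigm), while the privacy mechanism clips and noises the gradients of those moving targets, which is why formal DP guarantees for the combined procedure need separate analysis.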

References (20)
  1. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, 308–318.
  2. Anonymous ICLR22 Reviewers. 2022. Review of ICLR22 submitted manuscript: Practical Adversarial Training with Differential Privacy for Deep Learning. https://openreview.net/forum?id=1hw-h1C8bch.
  3. Practical Adversarial Training with Differential Privacy for Deep Learning.
  4. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(3).
  5. Deng, L. 2012. The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine, 29(6): 141–142.
  6. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
  7. Private convex empirical risk minimization and high-dimensional regression. In Conference on Learning Theory, 25–1. JMLR Workshop and Conference Proceedings.
  8. Learning multiple layers of features from tiny images.
  9. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
  10. Scalable differential privacy with certified robustness in adversarial learning. In International Conference on Machine Learning, 7683–7694. PMLR.
  11. Overfitting in adversarially robust deep learning. In International Conference on Machine Learning, 8093–8104. PMLR.
  12. Learning in a large function space: Privacy-preserving mechanisms for SVM learning. arXiv preprint arXiv:0911.5708.
  13. White-box vs black-box: Bayes optimal strategies for membership inference. In International Conference on Machine Learning, 5558–5567. PMLR.
  14. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), 3–18. IEEE.
  15. Privacy risks of securing machine learning models against adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 241–257.
  16. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
  17. Stealing machine learning models via prediction APIs. In 25th USENIX security symposium (USENIX Security 16), 601–618.
  18. Robustness threats of differential privacy. arXiv preprint arXiv:2012.07828.
  19. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
  20. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st computer security foundations symposium (CSF), 268–282. IEEE.
