
Generalization Techniques Empirically Outperform Differential Privacy against Membership Inference (2110.05524v1)

Published 11 Oct 2021 in cs.CR

Abstract: Differentially private training algorithms provide protection against one of the most popular attacks in machine learning: the membership inference attack. However, these privacy algorithms incur a loss of the model's classification accuracy, thereby creating a privacy-utility trade-off. The amount of noise that differential privacy requires to provide strong theoretical protection guarantees in deep learning typically renders the models unusable, but researchers have observed that even lower noise levels provide acceptable empirical protection against existing membership inference attacks. In this work, we look for alternatives to differential privacy towards empirically protecting against membership inference attacks. We study the protection that simply following good machine learning practices (not designed with privacy in mind) offers against membership inference. We evaluate the performance of state-of-the-art techniques, such as pre-training and sharpness-aware minimization, alone and with differentially private training algorithms, and find that, when using early stopping, the algorithms without differential privacy can provide both higher utility and higher privacy than their differentially private counterparts. These findings challenge the belief that differential privacy is a good defense against existing membership inference attacks.
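To make the threat model concrete, the kind of membership inference attack the abstract refers to can be sketched as a simple loss-threshold attack: an adversary guesses that an example was in the training set if the model's loss on it is below some threshold, since models typically fit training members better than unseen data. This is a minimal illustrative sketch, not the specific attack evaluated in the paper; the function names, threshold, and synthetic loss values below are assumptions for illustration.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# The losses here are synthetic; a real attack would query the target
# model for per-example losses on candidate records.

def membership_inference(losses, threshold):
    """Guess 'member' for any example whose loss falls below the threshold,
    exploiting the tendency of models to fit training data more closely."""
    return [loss < threshold for loss in losses]

def attack_accuracy(member_losses, nonmember_losses, threshold):
    """Fraction of correct membership guesses over a balanced evaluation set.
    An accuracy near 0.5 indicates little membership leakage."""
    correct = sum(membership_inference(member_losses, threshold))
    correct += sum(not guess
                   for guess in membership_inference(nonmember_losses, threshold))
    return correct / (len(member_losses) + len(nonmember_losses))

# Synthetic example: training members tend to have lower loss.
members = [0.05, 0.10, 0.30, 0.15]      # hypothetical per-example losses
nonmembers = [0.80, 0.60, 0.90, 0.30]   # hypothetical per-example losses
print(attack_accuracy(members, nonmembers, threshold=0.25))  # 0.875
```

Defenses such as differential privacy, or the generalization techniques the paper studies (e.g., early stopping), aim to shrink the train/test loss gap so this kind of thresholding performs no better than chance.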

Citations (7)

