
On the Discredibility of Membership Inference Attacks (2212.02701v2)

Published 6 Dec 2022 in cs.CR and cs.AI

Abstract: With the widespread application of machine learning models, it has become critical to study the potential data leakage of models trained on sensitive data. Recently, various membership inference (MI) attacks have been proposed to determine whether a sample was part of the training set. The question is whether these attacks can be reliably used in practice. We show that MI models frequently misclassify neighboring nonmember samples of a member sample as members. In other words, they have a high false positive rate on the subpopulations of the exact member samples that they can identify. We then showcase a practical application of MI attacks where this issue has real-world repercussions. Here, MI attacks are used by an external auditor (investigator) to show a judge or jury that an auditee unlawfully used sensitive data. Due to the high false positive rate of MI attacks on members' subpopulations, the auditee can challenge the credibility of the auditor by revealing the performance of the MI attacks on these subpopulations. We argue that current membership inference attacks can identify memorized subpopulations, but they cannot reliably identify which exact sample in a subpopulation was used during training.
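To make the setting concrete, the sketch below shows a simple loss-threshold MI attack and the kind of neighbor false-positive check the abstract describes. This is an illustrative assumption, not the paper's exact attack or evaluation protocol: the model, the threshold, and the Gaussian perturbation used to synthesize "neighboring" nonmember samples are all hypothetical choices.

```python
# Minimal sketch (assumption, not the paper's method): a loss-threshold MI attack
# and a false-positive check on synthetic neighbors of known member samples.
import torch
import torch.nn.functional as F


def mi_attack(model, x, y, threshold):
    """Flag a sample as a training-set member when its per-sample loss is below a threshold."""
    model.eval()
    with torch.no_grad():
        logits = model(x)
        loss = F.cross_entropy(logits, y, reduction="none")
    return loss < threshold  # True -> predicted member


def neighbor_false_positive_rate(model, members_x, members_y, threshold,
                                 noise_std=0.05, n_neighbors=8):
    """Fraction of synthetic nonmember neighbors of member samples that the attack wrongly flags as members.

    Neighbors are generated with Gaussian noise here purely for illustration;
    the paper's notion of a member's subpopulation may be defined differently.
    """
    flags = []
    for _ in range(n_neighbors):
        neighbors = members_x + noise_std * torch.randn_like(members_x)
        flags.append(mi_attack(model, neighbors, members_y, threshold))
    return torch.cat(flags).float().mean().item()
```

In an audit scenario like the one described above, a high value returned by such a neighbor check is exactly what lets the auditee dispute the attack's verdicts: the attack fires on the memorized subpopulation rather than on the specific training sample.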

Citations (1)


Authors (2)
