
On the Discredibility of Membership Inference Attacks (2212.02701v2)

Published 6 Dec 2022 in cs.CR and cs.AI

Abstract: With the widespread application of machine learning models, it has become critical to study the potential data leakage of models trained on sensitive data. Recently, various membership inference (MI) attacks have been proposed to determine whether a sample was part of the training set. The question is whether these attacks can be reliably used in practice. We show that MI models frequently misclassify neighboring nonmember samples of a member sample as members. In other words, they have a high false positive rate on the subpopulations of the exact member samples that they can identify. We then showcase a practical application of MI attacks where this issue has a real-world repercussion. Here, MI attacks are used by an external auditor (investigator) to show to a judge/jury that an auditee unlawfully used sensitive data. Due to the high false positive rate of MI attacks on members' subpopulations, the auditee challenges the credibility of the auditor by revealing the performance of the MI attacks on these subpopulations. We argue that current membership inference attacks can identify memorized subpopulations, but they cannot reliably identify which exact sample in the subpopulation was used during training.
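To make the evaluation the abstract describes concrete, below is a minimal, hypothetical sketch (not the paper's method or code): a simple loss-threshold membership inference attack is run against a toy target model and scored on true members, on independent nonmembers, and on "neighboring" nonmembers built by lightly perturbing member samples. The synthetic data, the Gaussian perturbation used to stand in for "same subpopulation, not in training set", and the threshold choice are all assumptions for illustration.

```python
# Sketch of evaluating a loss-threshold MI attack on neighboring nonmembers.
# All data, the perturbation scale, and the threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary-classification data: members (training set) and independent nonmembers.
n, d = 500, 20
X_member = rng.normal(size=(n, d))
y_member = (X_member[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
X_nonmember = rng.normal(size=(n, d))
y_nonmember = (X_nonmember[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Target model trained only on the member set.
model = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def per_sample_loss(X, y):
    """Cross-entropy loss of the target model on each sample."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

# Loss-threshold attack: flag a sample as "member" if its loss falls below tau.
tau = np.median(per_sample_loss(X_member, y_member))

def predict_member(X, y):
    return per_sample_loss(X, y) < tau

# Neighboring nonmembers: small perturbations of member samples (assumed proxy
# for samples from the same subpopulation that were never used in training).
X_neighbor = X_member + 0.05 * rng.normal(size=X_member.shape)
y_neighbor = y_member

tpr = predict_member(X_member, y_member).mean()
fpr_random = predict_member(X_nonmember, y_nonmember).mean()
fpr_neighbor = predict_member(X_neighbor, y_neighbor).mean()

print(f"TPR on members:                {tpr:.2f}")
print(f"FPR on random nonmembers:      {fpr_random:.2f}")
print(f"FPR on neighboring nonmembers: {fpr_neighbor:.2f}")
```

The relevant comparison is the false positive rate on the perturbed neighbors versus on independent nonmembers: because small perturbations barely move a sample's loss, the neighbor decisions track the member decisions, which is the mechanism behind the paper's claim that such attacks flag memorized subpopulations rather than the exact training samples.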

