
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning (2310.03838v2)

Published 5 Oct 2023 in cs.LG

Abstract: The integration of ML into numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training. One such privacy risk is Membership Inference (MI), in which an attacker seeks to determine whether a particular data sample was included in the training dataset of a model. Current state-of-the-art MI attacks capitalize on access to the model's predicted confidence scores to successfully perform membership inference, and employ data poisoning to further enhance their effectiveness. In this work, we focus on the less explored and more realistic label-only setting, where the model provides only the predicted label on a queried sample. We show that existing label-only MI attacks are ineffective at inferring membership in the low False Positive Rate (FPR) regime. To address this challenge, we propose a new attack, Chameleon, which leverages a novel adaptive data poisoning strategy and an efficient query selection method to achieve significantly more accurate membership inference than existing label-only attacks, especially at low FPRs.
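The label-only threat model described in the abstract can be illustrated with a small sketch. This is not the paper's Chameleon attack; it is a toy version of the generic label-only idea from prior work: query the model with perturbed copies of a sample and use the fraction of copies that keep their correct label as a membership score (memorized training points tend to be more robust to perturbation). The `toy_model`, noise scale, and query count are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_only_membership_score(model_predict, x, y_true,
                                n_queries=32, noise_scale=0.05):
    """Toy label-only membership score: the fraction of noisy copies of x
    that the model still labels as y_true. Only hard labels are used, never
    confidence scores, matching the label-only setting."""
    hits = []
    for _ in range(n_queries):
        x_noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
        hits.append(model_predict(x_noisy) == y_true)
    return float(np.mean(hits))

# Hypothetical stand-in classifier: labels a point 1 iff its mean exceeds 0.
def toy_model(x):
    return int(x.mean() > 0.0)

member = np.full(8, 0.5)       # far from the decision boundary: robust label
non_member = np.full(8, 0.01)  # near the boundary: label flips under noise

print(label_only_membership_score(toy_model, member, 1))      # close to 1.0
print(label_only_membership_score(toy_model, non_member, 1))  # noticeably lower
```

An attacker would threshold this score to decide membership; the paper's point is that naive scores of this kind perform poorly when the threshold is set for a low false positive rate, which motivates Chameleon's adaptive poisoning and query selection.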

Authors (4)
  1. Harsh Chaudhari
  2. Giorgio Severi
  3. Alina Oprea
  4. Jonathan Ullman
Citations (4)

