
Vicious Classifiers: Assessing Inference-time Data Reconstruction Risk in Edge Computing (2212.04223v3)

Published 8 Dec 2022 in cs.LG, cs.CR, cs.IT, and math.IT

Abstract: Privacy-preserving inference in edge computing paradigms encourages users of machine-learning services to run a model locally on their private input and share only the model's outputs for a target task with the server. We study how a vicious server can reconstruct the input data by observing only the model's outputs, while keeping the target accuracy very close to that of an honest server, by jointly training a target model (to run at the user's side) and an attack model for data reconstruction (to be used secretly at the server's side). We present a new measure to assess the inference-time reconstruction risk. Evaluations on six benchmark datasets show that the model's input can be approximately reconstructed from the outputs of a single inference. We propose a primary defense mechanism to distinguish vicious from honest classifiers at inference time. By studying this risk associated with emerging ML services, our work has implications for enhancing privacy in edge computing. We discuss open challenges and directions for future studies, and release our code as a benchmark for the community at https://github.com/mmalekzadeh/vicious-classifiers.
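The joint-training idea in the abstract can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the `TargetModel` and `AttackDecoder` architectures, the MNIST-like 28x28 input shape, and the weighting term `alpha` are placeholders, not the authors' actual models or loss; the released repository should be consulted for the real training setup.

```python
# Hypothetical sketch: a target classifier f (deployed on the user's device)
# and an attack decoder g (kept by the server) are trained jointly so that
# f's outputs remain accurate for the target task while also carrying enough
# information for g to approximately reconstruct the input from those outputs.
import torch
import torch.nn as nn

class TargetModel(nn.Module):          # runs at the user's side
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)             # only these outputs leave the device

class AttackDecoder(nn.Module):        # used secretly at the server's side
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

f, g = TargetModel(), AttackDecoder()
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
alpha = 0.5                            # assumed trade-off between task accuracy and reconstruction

def train_step(x, y):
    """One joint update: classification loss plus reconstruction loss."""
    opt.zero_grad()
    logits = f(x)
    recon = g(torch.softmax(logits, dim=1))
    loss = ce(logits, y) + alpha * mse(recon, x)
    loss.backward()
    opt.step()
    return loss.item()
```

The key design point this sketch tries to convey is that both networks share one optimizer and one combined objective, so the classifier is shaped to leak reconstructable information through outputs that still look like ordinary predictions.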

