Vicious Classifiers: Assessing Inference-time Data Reconstruction Risk in Edge Computing (2212.04223v3)
Abstract: Privacy-preserving inference in edge computing paradigms encourages the users of machine-learning services to locally run a model on their private input and only share the model's outputs for a target task with the server. We study how a vicious server can reconstruct the input data by observing only the model's outputs, while keeping the target accuracy very close to that of an honest server, by jointly training a target model (to run at the users' side) and an attack model for data reconstruction (to secretly use at the server's side). We present a new measure to assess the inference-time reconstruction risk. Evaluations on six benchmark datasets show that the model's input can be approximately reconstructed from the outputs of a single inference. We propose a primary defense mechanism to distinguish vicious from honest classifiers at inference time. By studying such a risk associated with emerging ML services, our work has implications for enhancing privacy in edge computing. We discuss open challenges and directions for future studies, and release our code as a benchmark for the community at https://github.com/mmalekzadeh/vicious-classifiers.
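The joint-training idea described in the abstract can be illustrated with a minimal sketch: a classifier (the target model, run by the user) and a decoder (the attack model, kept by the server) are optimized together on a combined objective, cross-entropy for the target task plus a reconstruction (MSE) term that pushes the classifier's outputs to also encode the input. This is a toy linear/numpy illustration of the objective, not the paper's actual architecture; all names (`W_cls`, `W_dec`, `lam`, the synthetic data) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3                                   # input dim, number of classes
X = rng.normal(size=(32, d))                  # toy private inputs
y = rng.integers(0, k, size=32)               # toy target labels

W_cls = rng.normal(scale=0.1, size=(k, d))    # target model (user side)
W_dec = rng.normal(scale=0.1, size=(d, k))    # attack decoder (server side)
lam, lr = 1.0, 0.05                           # reconstruction weight, step size

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss():
    """Average of cross-entropy (target task) + lam * MSE (reconstruction)."""
    total = 0.0
    for xi, yi in zip(X, y):
        z = W_cls @ xi                        # outputs shared with the server
        x_hat = W_dec @ z                     # server's reconstruction
        total += -np.log(softmax(z)[yi] + 1e-12) + lam * np.mean((x_hat - xi) ** 2)
    return total / len(X)

loss_before = joint_loss()
for _ in range(300):                          # full-batch gradient descent
    gW_cls = np.zeros_like(W_cls)
    gW_dec = np.zeros_like(W_dec)
    for xi, yi in zip(X, y):
        z = W_cls @ xi
        p = softmax(z)
        err = W_dec @ z - xi                  # reconstruction error
        dz = p.copy(); dz[yi] -= 1.0          # d(cross-entropy)/d(logits)
        dz += lam * (2.0 / d) * (W_dec.T @ err)  # d(MSE)/d(logits)
        gW_cls += np.outer(dz, xi)
        gW_dec += lam * (2.0 / d) * np.outer(err, z)
    W_cls -= lr * gW_cls / len(X)
    W_dec -= lr * gW_dec / len(X)
loss_after = joint_loss()

# outputs still serve the target task while leaking the input
acc = np.mean([np.argmax(W_cls @ xi) == yi for xi, yi in zip(X, y)])
```

The same structure carries over to the deep-network setting studied in the paper: the server trains both networks, ships only the classifier to the user, and later applies the decoder to the returned outputs.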