SODA: Protecting Proprietary Information in On-Device Machine Learning Models (2312.15036v1)
Abstract: The growth of low-end hardware has led to a proliferation of machine learning-based services in edge applications. These applications gather contextual information about users and provide services, such as personalized offers, through an ML model. A growing practice has been to deploy such ML models on the user's device to reduce latency, maintain user privacy, and minimize continuous reliance on a centralized source. However, deploying ML models on the user's edge device can leak proprietary information about the service provider. In this work, we investigate on-device ML models used to provide mobile services and demonstrate how simple attacks can leak proprietary information of the service provider. We show that different adversaries can easily exploit such models to maximize their profit and accomplish content theft. Motivated by the need to thwart such attacks, we present SODA, an end-to-end framework for deploying and serving ML models on edge devices while defending against adversarial usage. Our results demonstrate that SODA can detect adversarial usage with 89% accuracy in fewer than 50 queries, with minimal impact on service performance, latency, and storage.