ShadowNet: A Secure and Efficient On-device Model Inference System for Convolutional Neural Networks (2011.05905v4)

Published 11 Nov 2020 in cs.CR and cs.LG

Abstract: With the increased usage of AI accelerators on mobile and edge devices, on-device ML is gaining popularity. Thousands of proprietary ML models are being deployed today on billions of untrusted devices. This raises serious security concerns about model privacy. However, protecting model privacy without losing access to the untrusted AI accelerators is a challenging problem. In this paper, we present a novel on-device model inference system, ShadowNet. ShadowNet protects model privacy with a Trusted Execution Environment (TEE) while securely outsourcing the heavy linear layers of the model to the untrusted hardware accelerators. ShadowNet achieves this by transforming the weights of the linear layers before outsourcing them and restoring the results inside the TEE. The non-linear layers are also kept secure inside the TEE. ShadowNet's design ensures efficient transformation of the weights and the subsequent restoration of the results. We build a ShadowNet prototype based on TensorFlow Lite and evaluate it on five popular CNNs, namely MobileNet, ResNet-44, MiniVGG, ResNet-404, and YOLOv4-tiny. Our evaluation shows that ShadowNet achieves strong security guarantees with reasonable performance, offering a practical solution for secure on-device model inference.
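
The abstract's outsourcing scheme can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: it models the linear-layer transformation as a secret per-output scaling plus a column permutation applied inside the TEE, runs the obfuscated matrix multiplication on the untrusted accelerator, and then undoes both steps inside the TEE. The function names (`transform_weights`, `untrusted_linear`, `restore`) and the exact transformation are hypothetical.

```python
import numpy as np

# Hypothetical ShadowNet-style flow for one linear (e.g., dense / 1x1 conv) layer.
# Assumption: the transformation is a secret per-output scaling plus a column
# permutation; the real system may use a richer linear transformation.

rng = np.random.default_rng(0)

def transform_weights(W):
    """Obfuscate a weight matrix W (in_dim x out_dim) inside the TEE."""
    out_dim = W.shape[1]
    perm = rng.permutation(out_dim)               # secret output permutation
    scales = rng.uniform(0.5, 2.0, size=out_dim)  # secret per-output scaling
    W_obf = (W * scales)[:, perm]                 # scale columns, then permute them
    return W_obf, (perm, scales)

def untrusted_linear(x, W_obf):
    """Runs outside the TEE (e.g., on a GPU/NPU); sees only transformed weights."""
    return x @ W_obf

def restore(y_obf, secret):
    """Undo the transformation inside the TEE to recover the true outputs."""
    perm, scales = secret
    y = np.empty_like(y_obf)
    y[:, perm] = y_obf                            # invert the permutation
    return y / scales                             # invert the scaling

# Toy end-to-end check: restored result matches the plain computation.
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16))
W_obf, secret = transform_weights(W)
y = restore(untrusted_linear(x, W_obf), secret)
assert np.allclose(y, x @ W)
```

The design intent, as the abstract describes, is that the untrusted accelerator does the heavy matrix arithmetic while only ever seeing transformed weights, and the TEE performs the cheap transformation and restoration plus the non-linear layers.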

Authors (6)
  1. Zhichuang Sun (4 papers)
  2. Ruimin Sun (7 papers)
  3. Changming Liu (2 papers)
  4. Amrita Roy Chowdhury (18 papers)
  5. Long Lu (15 papers)
  6. Somesh Jha (112 papers)
Citations (15)
