
Primer: Fast Private Transformer Inference on Encrypted Data (2303.13679v1)

Published 23 Mar 2023 in cs.CR

Abstract: It is increasingly important to enable privacy-preserving inference for cloud services based on Transformers. Post-quantum cryptographic techniques such as fully homomorphic encryption (FHE) and multi-party computation (MPC) are popular methods for supporting private Transformer inference, but existing works still suffer from prohibitive computational and communication overhead. In this work, we present Primer, which enables fast and accurate Transformer inference over encrypted data for natural language processing tasks. In particular, Primer is built on a hybrid cryptographic protocol optimized for attention-based Transformer models, together with techniques including computation merge and tokens-first ciphertext packing. Comprehensive experiments on encrypted language modeling show that Primer achieves state-of-the-art accuracy and reduces inference latency by 90.6% to 97.5% compared with previous methods.
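The abstract does not specify Primer's exact packing layout, but the general idea behind SIMD-style ciphertext packing can be sketched. Below is a toy numpy simulation, not an implementation of Primer: there is no real encryption, and the parameters `T`, `D`, `SLOTS` and the helper `pack_tokens_first` are all hypothetical names chosen for illustration. It shows why packing tokens along the slot axis is attractive: one slot-wise operation then acts on every token of a feature dimension at once, which is the kind of batching that FHE packing schemes exploit.

```python
import numpy as np

# Hypothetical parameters (not from the paper): sequence length,
# embedding dimension, and slot count of a CKKS-style ciphertext.
T, D, SLOTS = 8, 4, 16

rng = np.random.default_rng(0)
x = rng.standard_normal((T, D))  # token embeddings for one sequence

def pack_tokens_first(x, slots):
    """Simulated tokens-first packing: each 'ciphertext' carries one
    feature dimension across all tokens, so a per-token operation
    becomes a single slot-wise (SIMD) operation. Real FHE would
    encrypt each slot vector; here we just lay out the plaintext."""
    T, D = x.shape
    cts = []
    for d in range(D):
        vec = np.zeros(slots)
        vec[:T] = x[:, d]  # tokens occupy the leading slots
        cts.append(vec)
    return cts             # D "ciphertexts", each with `slots` slots

cts = pack_tokens_first(x, SLOTS)

# Adding a per-feature bias now costs one slot-wise add per feature
# dimension, instead of one operation per token.
bias = rng.standard_normal(D)
cts_biased = [ct + bias[d] for d, ct in enumerate(cts)]

# Check the packed result against the plaintext computation.
assert np.allclose(np.stack(cts_biased)[:, :T].T, x + bias)
```

In a real scheme the benefit is that homomorphic operations are expensive, so amortizing one operation over all tokens in a ciphertext reduces latency; Primer's actual protocol, including where it switches between FHE and MPC, is described in the paper itself.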

Authors (3)
  1. Mengxin Zheng (17 papers)
  2. Qian Lou (40 papers)
  3. Lei Jiang (85 papers)
Citations (21)
