Approximate Private Inference in Quantized Models (2305.03801v1)

Published 5 May 2023 in cs.IT and math.IT

Abstract: Private inference refers to a two-party setting in which one party has a model (e.g., a linear classifier), the other has data, and the model is to be applied over the data while safeguarding the privacy of both parties. In particular, models whose weights are quantized (e.g., to 1 or -1) have gained increasing attention lately, due to their benefits in efficient, private, or robust computations. Traditionally, private inference has been studied from a cryptographic standpoint, which suffers from high complexity and degraded accuracy. More recently, Raviv et al. showed that in quantized models an information-theoretic tradeoff exists between the privacy of the two parties, and presented a scheme, based on a combination of Boolean and real-valued algebra, that attains this tradeoff. Both the scheme and the respective bound require the computation to be done exactly. In this work we show that by relaxing the requirement for exact computation, one can break the information-theoretic privacy barrier of Raviv et al. and provide better privacy at the same communication cost. We provide a scheme for such approximate computation, bound its error, show its improved privacy, and derive a corresponding lower bound for some parameter regimes.

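To make the setting concrete, the sketch below shows inference with a sign-quantized linear classifier of the kind the abstract describes: one party holds weights in {+1, -1}, the other holds a real-valued input, and the goal is to evaluate the classifier's decision. The "approximate" evaluation by random coordinate subsampling is a purely hypothetical stand-in for inexact computation; it is not the scheme of Raviv et al. or of this paper, whose privacy mechanisms are not detailed in the abstract.

```python
import numpy as np

# Toy illustration of the quantized-model inference setting (not the
# paper's privacy scheme). One party holds the model w in {+1, -1}^n,
# the other holds data x in R^n; the desired output is sign(<w, x>).

rng = np.random.default_rng(0)

n = 1024
w = rng.choice([-1, 1], size=n)   # quantized model weights (model holder)
x = rng.standard_normal(n)        # real-valued input (data holder)

exact = np.sign(w @ x)            # exact inference result

# Hypothetical "approximate" evaluation: compute the inner product over a
# random subset of coordinates. This only illustrates the exact-vs-
# approximate distinction that the abstract's relaxation refers to.
k = n // 4                        # hypothetical coordinate budget
idx = rng.choice(n, size=k, replace=False)
approx = np.sign(w[idx] @ x[idx])

print("exact:", exact, "approximate:", approx)
```

With a large enough coordinate budget the subsampled decision usually agrees with the exact one; the paper's contribution is showing that such a relaxation to inexact computation can buy strictly better privacy at the same communication cost.
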
Citations (3)