
An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning (2002.09843v5)

Published 23 Feb 2020 in cs.LG, cs.CR, cs.CV, and stat.ML

Abstract: Although federated learning improves the privacy of training data by exchanging local gradients or parameters rather than raw data, an adversary can still leverage those gradients and parameters to recover local training data through reconstruction and membership inference attacks. To defend against such privacy attacks, many noise perturbation methods (such as differential privacy or CountSketch matrices) have been designed. However, these schemes cannot ensure strong defence and high learning accuracy at the same time, which impedes the wide application of FL in practice (especially for medical or financial institutions that require both high accuracy and strong privacy guarantees). To overcome this issue, in this paper we propose \emph{an efficient model perturbation method for federated learning} that defends against reconstruction and membership inference attacks launched by curious clients. On the one hand, similar to differential privacy, our method selects random numbers as perturbation noises added to the global model parameters, so it is very efficient and easy to integrate in practice. Moreover, the randomly selected noises are positive real numbers whose values can be arbitrarily large, which ensures strong defence. On the other hand, unlike differential privacy and other perturbation methods that cannot eliminate the added noises, our method allows the server to recover the true gradients by eliminating the noises it added. Therefore, our method does not hinder learning accuracy at all.
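The core idea in the abstract, that the server adds removable noise to the global parameters and later eliminates it exactly, can be illustrated with a toy sketch. This is a deliberately simplified illustration, not the paper's actual construction: the variable names, the additive form of the perturbation, and the noise range are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy global model: a flat parameter vector.
true_params = np.array([0.5, -1.2, 3.3])

# Server side: draw large positive random noise and add it to the
# parameters before broadcasting them. (Hypothetical simplification;
# the paper's scheme differs in its exact construction.)
noise = rng.uniform(1e3, 1e6, size=true_params.shape)
perturbed = true_params + noise

# Curious clients observe only `perturbed`, which reveals little about
# `true_params` because the noise magnitude can be arbitrarily large.

# Server side: having kept `noise`, the server eliminates it exactly,
# so no accuracy is lost -- unlike differential privacy, where the
# added noise is irrecoverable.
recovered = perturbed - noise
assert np.allclose(recovered, true_params)
```

The contrast with differential privacy is the key point: here the noise is a reversible mask known only to the server, rather than an irreversible statistical perturbation, so defence strength and model accuracy are decoupled.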

Authors (7)
  1. Xue Yang (141 papers)
  2. Yan Feng (82 papers)
  3. Weijun Fang (21 papers)
  4. Jun Shao (27 papers)
  5. Xiaohu Tang (96 papers)
  6. Shu-Tao Xia (171 papers)
  7. Rongxing Lu (21 papers)
Citations (37)
