Data Poisoning Attacks Against Federated Learning Systems (2007.08432v2)

Published 16 Jul 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server. However, the distributed nature of FL gives rise to new threats caused by potentially malicious participants. In this paper, we study targeted data poisoning attacks against FL systems in which a malicious subset of the participants aim to poison the global model by sending model updates derived from mislabeled data. We first demonstrate that such data poisoning attacks can cause substantial drops in classification accuracy and recall, even with a small percentage of malicious participants. We additionally show that the attacks can be targeted, i.e., they have a large negative impact only on classes that are under attack. We also study attack longevity in early/late round training, the impact of malicious participant availability, and the relationships between the two. Finally, we propose a defense strategy that can help identify malicious participants in FL to circumvent poisoning attacks, and demonstrate its effectiveness.

Citations (565)

Summary

  • The paper reveals that targeted data poisoning attacks can significantly reduce model accuracy even with only 2% malicious participants.
  • Experiments using CIFAR-10 and Fashion-MNIST show that late-round poisoning leads to lasting impacts on targeted classes.
  • The study proposes a detection strategy using PCA to differentiate malicious updates and strengthen defense in decentralized systems.

Analysis of Data Poisoning Attacks on Federated Learning Systems

The paper under review investigates vulnerabilities in Federated Learning (FL) systems, specifically focusing on data poisoning attacks. Federated Learning, a prominent decentralized training paradigm, aims to enhance privacy by retaining data on local devices while sharing only model updates with a central server. However, this distributed structure makes the system susceptible to malicious participants who can submit poisoned updates to degrade the global model's performance.
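
To make the aggregation step concrete, the following minimal sketch illustrates FedAvg-style averaging of client updates. The flattened-vector representation and the size-weighted averaging are simplifying assumptions for illustration, not the paper's exact protocol.

```python
# Minimal FedAvg-style aggregation sketch (an illustration, not the authors'
# implementation): each selected client returns a model update, and the server
# averages them. Weighting by local dataset size is a common convention and an
# assumption here.
import numpy as np

def federated_average(client_updates, client_sizes):
    """client_updates: list of flattened parameter vectors (np.ndarray).
    client_sizes: number of local training examples per client."""
    weights = np.asarray(client_sizes, dtype=np.float64)
    weights /= weights.sum()
    stacked = np.stack(client_updates)            # shape: (num_clients, dim)
    return (weights[:, None] * stacked).sum(axis=0)

# A poisoned update contributed by a malicious client enters this average
# on equal footing with benign ones, which is the opening the paper studies.
```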

Research Focus and Findings

This paper scrutinizes targeted data poisoning attacks, in which a subset of participants train on deliberately mislabeled data (label flipping) and thereby diminish the global model's classification accuracy and recall. Through experiments on CIFAR-10 and Fashion-MNIST, the paper demonstrates the feasibility of such attacks even with a minimal percentage of malicious participants (as low as 2%). Notably, the attack's impact falls disproportionately on the targeted classes, enabling focused disruption while leaving overall performance largely intact and the attack relatively stealthy.
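
The attack itself amounts to label flipping on a malicious client's local data before local training. The sketch below illustrates that step; the specific source and target classes (5 → 3) and the NumPy label array are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (not the authors' exact code) of the label-flipping step a
# malicious FL participant might apply before local training. The 5 -> 3
# class mapping is an illustrative assumption.
import numpy as np

def flip_labels(labels, source_class=5, target_class=3):
    """Relabel every example of `source_class` as `target_class`."""
    labels = np.asarray(labels).copy()
    labels[labels == source_class] = target_class
    return labels

# Example: a malicious client poisons its local shard, then trains as usual.
local_labels = np.random.randint(0, 10, size=500)   # stand-in for CIFAR-10 labels
poisoned_labels = flip_labels(local_labels)
assert not np.any(poisoned_labels == 5)
```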

Key observations include:

  • Attack Efficacy: The attack’s effectiveness correlates with the proportion of malicious participants, showing significant utility reduction in the global model even with low malicious participant ratios.
  • Impact Longevity: Early-round attacks typically do not have a lasting impact, as the model can recover in subsequent rounds; late-round poisoning, by contrast, has enduring effects.
  • Participant Availability: Increasing malicious participant selection rates amplifies attack severity, particularly in later rounds of training.

Defense Mechanism

To counteract these vulnerabilities, the authors propose a detection strategy allowing the FL aggregator to identify malicious updates. This method leverages the distinct characteristics of updates originating from malicious participants. By extracting relevant update subsets and employing PCA for dimensionality reduction, the strategy successfully distinguishes between malicious and benign contributions.
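
The sketch below conveys the idea behind the PCA-based detection: project each client's update (or the relevant slice of it) into a low-dimensional space and look for an outlying cluster. The synthetic data, the two-component projection, and any clustering or threshold rule applied to the projected points are illustrative assumptions rather than the authors' exact procedure.

```python
# Minimal sketch of the PCA-based separation idea (assumptions: updates are
# flattened NumPy vectors collected by the aggregator each round). The paper
# restricts attention to the update coordinates relevant to the attacked
# class; that slicing step is omitted here for brevity.
import numpy as np
from sklearn.decomposition import PCA

def project_updates(client_updates, n_components=2):
    """client_updates: array-like of shape (num_clients, update_dim)."""
    X = np.asarray(client_updates, dtype=np.float64)
    return PCA(n_components=n_components).fit_transform(X)

# Usage: benign updates tend to form one tight cluster, while poisoned
# updates drift away along the leading components and can be flagged by
# inspection or a simple clustering rule (an assumption, not the paper's
# exact rule).
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(45, 128))      # synthetic benign updates
malicious = rng.normal(1.0, 0.1, size=(5, 128))    # synthetic poisoned updates
coords = project_updates(np.vstack([benign, malicious]))
print(coords.shape)  # (50, 2)
```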

Implications and Future Directions

This work has significant implications for both the theoretical study and practical deployment of federated systems. It emphasizes the necessity of robust defense mechanisms against adversarial attacks that can stealthily undermine model integrity. Further, it motivates inquiry into more sophisticated adversarial strategies and the development of comprehensive defensive measures that go beyond traditional anomaly detection schemes.

Future work could explore extending the proposed defense to resist more complex and adaptive poisoning tactics, including those that evolve with the learning process. Moreover, the generalizability of these findings across different datasets and architectures presents fertile ground for continued research.

By advancing understanding of these adversarial dynamics, the paper makes a valuable contribution to the ongoing discourse on secure federated systems, encouraging further exploration into securing distributed learning paradigms against evolving threats.
