Byzantine-Resilient Federated Machine Learning via Over-the-Air Computation (2105.10883v1)

Published 23 May 2021 in cs.IT and math.IT

Abstract: Federated learning (FL) is recognized as a key enabling technology for providing intelligent services in future wireless networks and industrial systems with delay and privacy guarantees. However, the performance of wireless FL can be significantly degraded by Byzantine attacks, such as data poisoning, model poisoning, and free-riding attacks. To design a Byzantine-resilient FL paradigm for wireless networks with limited radio resources, we propose a novel communication-efficient robust model aggregation scheme via over-the-air computation (AirComp). This is achieved by applying the Weiszfeld algorithm to obtain a smoothed geometric median aggregation that is robust against Byzantine attacks. The additive structure of the Weiszfeld algorithm is further leveraged to match the signal superposition property of multiple-access channels via AirComp, thereby expediting the communication-efficient secure aggregation process of FL. Numerical results demonstrate the robustness of the proposed approach against Byzantine devices and its good learning performance.
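
The core aggregation step described in the abstract is the smoothed geometric median computed by Weiszfeld iterations, each of which reduces to a weighted average of the device updates; that additive form is what allows AirComp to realize every iteration through the superposition of simultaneously transmitted signals. The sketch below illustrates the idea in plain Python as a minimal, hedged example: the function name, the smoothing constant `eps`, the iteration count, and the toy data are illustrative assumptions rather than the paper's exact formulation, and the over-the-air channel itself is not modeled.

```python
import numpy as np

def smoothed_geometric_median(updates, eps=1e-6, num_iters=10):
    """Aggregate client model updates via a smoothed geometric median,
    computed with Weiszfeld iterations (illustrative sketch only).

    updates:   list of 1-D numpy arrays, one flattened model update per device.
    eps:       smoothing constant that keeps the per-device weights bounded.
    num_iters: number of Weiszfeld iterations.
    """
    updates = np.stack(updates)            # shape (num_devices, dim)
    z = updates.mean(axis=0)               # initialize at the plain average
    for _ in range(num_iters):
        # Smoothed distance of each update to the current estimate.
        dists = np.sqrt(np.sum((updates - z) ** 2, axis=1) + eps)
        weights = 1.0 / dists
        weights /= weights.sum()
        # Each iterate is a weighted sum of the updates; this additive
        # structure is what AirComp can realize via signal superposition
        # over a multiple-access channel.
        z = weights @ updates
    return z

# Toy usage: honest devices send similar updates, one Byzantine device sends garbage.
honest = [np.array([1.0, 1.0]) + 0.01 * np.random.randn(2) for _ in range(9)]
byzantine = [np.array([100.0, -100.0])]
print(smoothed_geometric_median(honest + byzantine))  # stays close to [1, 1]
print(np.mean(honest + byzantine, axis=0))            # the plain mean is dragged away
```

In this toy run the median-based aggregate remains near the honest consensus while the naive mean is pulled far off by the single Byzantine update, which is the robustness property the paper's numerical results evaluate.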

Citations (16)
