
Communication-Efficient Federated Learning via Optimal Client Sampling (2007.15197v2)

Published 30 Jul 2020 in cs.LG and stat.ML

Abstract: Federated learning (FL) ameliorates privacy concerns in settings where a central server coordinates learning from data distributed across many clients. The clients train locally and communicate the models they learn to the server; aggregation of local models requires frequent communication of large amounts of information between the clients and the central server. We propose a novel, simple, and efficient way of updating the central model in communication-constrained settings based on collecting models from clients with informative updates and estimating local updates that were not communicated. In particular, modeling the progression of the model's weights by an Ornstein-Uhlenbeck process allows us to derive an optimal sampling strategy for selecting a subset of clients with significant weight updates. The central server collects updated local models from only the selected clients and combines them with estimated model updates of the clients that were not selected for communication. We test this policy on a synthetic dataset for logistic regression and two FL benchmarks, namely, a classification task on EMNIST and a realistic language modeling task using the Shakespeare dataset. The results demonstrate that the proposed framework provides a significant reduction in communication while maintaining competitive or achieving superior performance compared to a baseline. Our method represents a new line of strategies for communication-efficient FL that is orthogonal to the existing user-local methods such as quantization or sparsification, thus complementing rather than aiming to replace those existing methods.
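
To make the selection-and-estimation idea in the abstract concrete, here is a minimal sketch of one aggregation round. It assumes clients are included with probability proportional to their local update norm and that the server substitutes a simple decayed (mean-reverting, OU-style) copy of each silent client's previous update; the function names, the `budget` parameter, and the `decay` constant are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_clients(update_norms, budget):
    """Include each client with probability proportional to its update norm,
    scaled so that roughly `budget` clients communicate per round."""
    norms = np.asarray(update_norms, dtype=float)
    probs = np.minimum(1.0, budget * norms / norms.sum())
    return np.flatnonzero(rng.random(norms.size) < probs)

def server_round(global_weights, local_updates, update_norms, prev_updates,
                 budget=10, decay=0.9):
    """One aggregation round: collect updates only from the selected clients
    and use a decayed previous update as the estimate for everyone else."""
    selected = set(select_clients(update_norms, budget).tolist())
    combined = []
    for i, update in enumerate(local_updates):
        if i in selected:
            combined.append(update)                   # communicated this round
        else:
            combined.append(decay * prev_updates[i])  # server-side estimate, no communication
    avg_update = np.mean(combined, axis=0)
    return global_weights + avg_update, selected
```

In this sketch, only the selected clients incur uplink cost each round, while the estimates for the remaining clients keep the aggregated update from being dominated by the few communicated models.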

Citations (86)
