Communication-Efficient Federated Learning via Optimal Client Sampling

(arXiv:2007.15197)
Published Jul 30, 2020 in cs.LG and stat.ML

Abstract

Federated learning (FL) ameliorates privacy concerns in settings where a central server coordinates learning from data distributed across many clients. The clients train locally and communicate the models they learn to the server; aggregation of local models requires frequent communication of large amounts of information between the clients and the central server. We propose a novel, simple, and efficient way of updating the central model in communication-constrained settings, based on collecting models from clients with informative updates and estimating the local updates that were not communicated. In particular, modeling the progression of the model's weights by an Ornstein-Uhlenbeck process allows us to derive an optimal sampling strategy for selecting a subset of clients with significant weight updates. The central server collects updated local models from only the selected clients and combines them with estimated model updates of the clients that were not selected for communication. We test this policy on a synthetic dataset for logistic regression and two FL benchmarks, namely a classification task on EMNIST and a realistic language modeling task using the Shakespeare dataset. The results demonstrate that the proposed framework provides a significant reduction in communication while maintaining competitive, or achieving superior, performance compared to a baseline. Our method represents a new line of strategies for communication-efficient FL that is orthogonal to existing user-local methods such as quantization or sparsification, and thus complements rather than aims to replace them.
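The abstract only outlines the mechanism, but the core idea, modeling the trajectory of each weight as an Ornstein-Uhlenbeck (OU) process dW_t = θ(μ − W_t)dt + σ dB_t and communicating only the clients whose local updates deviate most from the server's own prediction, can be sketched in a few lines. The Python snippet below is a minimal illustration under stated assumptions: the OU parameters mu and theta, the deviation-based informativeness score, and the fixed top-k selection are simplifications introduced here for illustration, not the paper's actual optimal sampling policy.

```python
import numpy as np

# Hedged sketch, not the authors' code: the server forecasts each client's
# weights with a one-step OU mean-reversion prediction and asks only the
# k clients whose actual updates deviate most from that forecast to upload.

def ou_predict(w_prev, mu, theta, dt=1.0):
    """One-step OU mean-reversion forecast (noise term omitted)."""
    return w_prev + theta * (mu - w_prev) * dt

def select_and_aggregate(w_global, client_updates, mu, theta, k):
    """Communicate only the k clients whose updates deviate most from
    the server's OU prediction; impute the rest with that prediction."""
    predicted = ou_predict(w_global, mu, theta)
    # Informativeness score: how far the actual local update lands from
    # what the server could have predicted without any communication.
    scores = [np.linalg.norm(w - predicted) for w in client_updates]
    selected = set(np.argsort(scores)[-k:].tolist())  # top-k deviations
    combined = [client_updates[i] if i in selected else predicted
                for i in range(len(client_updates))]
    return np.mean(combined, axis=0), sorted(selected)

# Toy usage: 10 clients, a 5-dimensional model, budget of 3 uploads per round.
rng = np.random.default_rng(0)
w_global = rng.normal(size=5)
client_updates = [w_global + rng.normal(scale=0.1, size=5) for _ in range(10)]
w_next, communicated = select_and_aggregate(
    w_global, client_updates, mu=np.zeros(5), theta=0.1, k=3)
print("clients asked to upload:", communicated)
```

The step the abstract emphasizes is the imputation: clients that stay silent still contribute to the aggregate through the server's OU estimate, which is what allows the communication budget to shrink while keeping performance competitive.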
