
Efficient Asynchronous Federated Learning with Sparsification and Quantization (2312.15186v2)

Published 23 Dec 2023 in cs.DC, cs.AI, and cs.LG

Abstract: Because data is distributed across many edge devices, Federated Learning (FL) is attracting growing attention as a way to collaboratively train a machine learning model without transferring raw data. FL typically relies on a parameter server and a large number of edge devices throughout model training, with only a subset of devices selected in each round. However, straggler devices may slow down training or even crash the system, while other idle edge devices remain unused. Because the bandwidth between the devices and the server is relatively low, communicating intermediate data becomes a bottleneck. In this paper, we propose Time-Efficient Asynchronous federated learning with Sparsification and Quantization (TEASQ-Fed). TEASQ-Fed fully exploits edge devices by letting them asynchronously apply for training tasks. We use control parameters to choose an appropriate number of edge devices that execute training tasks in parallel. In addition, we introduce a caching mechanism and staleness-aware weighted averaging to further improve accuracy. Furthermore, we propose a sparsification and quantization approach to compress the intermediate data and accelerate training. The experimental results show that TEASQ-Fed improves accuracy by up to 16.67% while converging up to twice as fast.
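
The abstract names two main ingredients: compressing each client update through sparsification and quantization, and merging asynchronous updates on the server with staleness-aware weighted averaging. The following is a minimal Python sketch of those ideas; the function names, the top-k sparsification rule, the uniform 8-bit quantization, and the staleness decay are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sparsify_topk(update, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of entries (assumed top-k rule)."""
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def quantize(values, bits=8):
    """Uniformly quantize the retained values to `bits` bits (assumed scheme)."""
    scale = float(np.max(np.abs(values))) or 1.0
    levels = 2 ** (bits - 1) - 1
    q = np.round(values / scale * levels).astype(np.int8)
    return q, scale

def dequantize(q, scale, bits=8):
    levels = 2 ** (bits - 1) - 1
    return q.astype(np.float32) * scale / levels

def staleness_weight(staleness, alpha=0.5):
    """Down-weight stale updates; the exact decay used in the paper may differ."""
    return alpha / (1.0 + staleness)

def server_merge(global_model, idx, q, scale, staleness):
    """Asynchronously fold one compressed, possibly stale client update into the global model."""
    update = np.zeros(global_model.size, dtype=np.float32)
    update[idx] = dequantize(q, scale)
    return global_model + staleness_weight(staleness) * update.reshape(global_model.shape)

# Toy usage: compress one client update and apply it with staleness 3.
model = np.zeros((4, 4), dtype=np.float32)
client_update = np.random.randn(4, 4).astype(np.float32)
idx, vals = sparsify_topk(client_update, ratio=0.25)
q, scale = quantize(vals)
model = server_merge(model, idx, q, scale, staleness=3)
```

In this sketch the server never waits for a round of clients: each arriving update is merged immediately, with older (more stale) updates given smaller weights, which mirrors the asynchronous, staleness-weighted averaging the abstract describes.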

