Is Aggregation the Only Choice? Federated Learning via Layer-wise Model Recombination (2305.10730v2)

Published 18 May 2023 in cs.LG

Abstract: Although Federated Learning (FL) enables global model training across clients without exposing their raw data, existing Federated Averaging (FedAvg)-based methods suffer from low inference performance when data are unevenly distributed among clients. Specifically, heterogeneous data distributions push local models toward different optimization directions, so aggregating them typically yields a poorly generalized global model that performs badly on most clients. To address this issue, and inspired by the geometric observation that well-generalized solutions lie in flat rather than sharp regions of the loss landscape, we propose a novel heuristic FL paradigm named FedMR (Federated Model Recombination), whose goal is to guide the recombined models to be trained toward a flat region. Unlike conventional FedAvg-based methods, the cloud server in FedMR does not distribute a single aggregated global model; instead, it recombines the collected local models by shuffling their corresponding layers, generating multiple recombined models for local training on the clients. Because a flat region covers a larger area than a sharp one, recombined models built from local models scattered across different regions are more likely to land in a flat region; once all recombined models lie in the same flat region, they are optimized toward the same direction. We theoretically analyze the convergence of model recombination. Experimental results show that, compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without exposing the privacy of any client.
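For intuition, here is a minimal sketch of the layer-wise recombination step the abstract describes. It is not the authors' released code: the function name `recombine_models` is illustrative, and it assumes all clients share an identical architecture so their state dicts (e.g., PyTorch `state_dict()` mappings) have matching keys.

```python
import copy
import random

def recombine_models(local_state_dicts, seed=None):
    """FedMR-style layer-wise recombination (illustrative sketch).

    Instead of averaging the clients' weights, build one recombined
    model per client by independently shuffling, for every layer,
    which client's weights each recombined model receives.
    """
    rng = random.Random(seed)
    n = len(local_state_dicts)
    layer_keys = list(local_state_dicts[0].keys())

    # One (initially empty) state dict per recombined model.
    recombined = [dict() for _ in range(n)]

    for key in layer_keys:
        # A fresh random permutation of client indices for this layer.
        perm = rng.sample(range(n), n)
        for dst, src in enumerate(perm):
            # Copy so later local training cannot alias the originals.
            recombined[dst][key] = copy.deepcopy(local_state_dicts[src][key])

    return recombined
```

In each round, the server would send recombined model i back to client i for further local training, in contrast to FedAvg, which broadcasts a single averaged model to all clients.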

Authors (9)
  1. Ming Hu (110 papers)
  2. Zhihao Yue (10 papers)
  3. Yihao Huang (51 papers)
  4. Cheng Chen (262 papers)
  5. Xian Wei (48 papers)
  6. Yang Liu (2253 papers)
  7. Mingsong Chen (53 papers)
  8. Xiaofei Xie (104 papers)
  9. Xiang Lian (28 papers)
Citations (4)
