
Client-supervised Federated Learning: Towards One-model-for-all Personalization (2403.19499v1)

Published 28 Mar 2024 in cs.LG

Abstract: Personalized Federated Learning (PerFL) is a machine learning paradigm that delivers personalized models to diverse clients under federated learning settings. Most PerFL methods require an extra learning process on each client to adapt the globally shared model into a client-specific personalized model using the client's local data. However, this model adaptation step remains an open challenge at deployment and test time. This work tackles the challenge by proposing a novel federated learning framework that learns a single robust global model whose performance on unseen/test clients is competitive with that of personalized models. Specifically, we design Client-Supervised Federated Learning (FedCS) to disentangle client-specific bias from instances' latent representations, so that the global model learns both client-specific and client-agnostic knowledge. Experiments show that FedCS learns a robust global model under the shifting data distributions of unseen/test clients. The FedCS global model can be deployed directly to test clients while achieving performance comparable to personalized FL methods that require model adaptation.
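The page does not include the paper's code, and the abstract only sketches the idea at a high level. The following is a minimal illustrative sketch, not the authors' FedCS algorithm: it mimics the general strategy the abstract describes, removing an estimated client-specific bias from instance representations before federated averaging, so the aggregated model captures client-agnostic structure and can be deployed directly to an unseen client. The synthetic data, the mean-shift bias model, and all function names here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: each client observes features shifted by a
# client-specific bias, while labels depend only on the shared signal.
n_clients, n_samples, dim = 4, 50, 3
true_w = np.array([1.0, -2.0, 0.5])
client_bias = rng.normal(0.0, 2.0, size=(n_clients, dim))

clients = []
for c in range(n_clients):
    x_shared = rng.normal(size=(n_samples, dim))   # client-agnostic signal
    y = x_shared @ true_w                          # labels ignore the bias
    x_obs = x_shared + client_bias[c]              # observed features are biased
    clients.append((x_obs, y))

def local_update(w, x, y, lr=0.05, steps=20):
    """One client's update: crudely de-bias features by removing the
    empirical mean shift, then take gradient steps on squared loss."""
    x_centered = x - x.mean(axis=0)
    for _ in range(steps):
        grad = 2.0 * x_centered.T @ (x_centered @ w - y) / len(y)
        w = w - lr * grad
    return w

# FedAvg-style aggregation over de-biased local updates.
w = np.zeros(dim)
for _ in range(30):
    w = np.mean([local_update(w, x, y) for x, y in clients], axis=0)

# Deploy the single global model directly to an unseen client with a
# fresh bias -- no per-client fine-tuning, only the same de-biasing step.
x_test_shared = rng.normal(size=(200, dim))
y_test = x_test_shared @ true_w
x_test = x_test_shared + rng.normal(0.0, 2.0, size=dim)
pred = (x_test - x_test.mean(axis=0)) @ w
mse = float(np.mean((pred - y_test) ** 2))
```

Because the bias here is a simple additive mean shift, centering suffices to strip it; the paper's actual mechanism for separating client-specific from client-agnostic knowledge is more general and is not reproduced by this sketch.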

References (19)
  1. “Communication-efficient learning of deep networks from decentralized data,” in Artificial intelligence and statistics. PMLR, 2017, pp. 1273–1282.
  2. “Adaptive personalized federated learning,” arXiv preprint arXiv:2003.13461, 2020.
  3. “Three approaches for personalization with applications to federated learning,” arXiv preprint arXiv:2002.10619, 2020.
  4. “Motley: Benchmarking heterogeneity and personalization in federated learning,” arXiv preprint arXiv:2206.09262, 2022.
  5. “Fine-tuning is fine in federated learning,” arXiv preprint arXiv:2108.07313, 2021.
  6. “Fedavg with fine tuning: Local updates lead to representation learning,” arXiv preprint arXiv:2205.13692, 2022.
  7. “Improving federated learning personalization via model agnostic meta learning,” arXiv preprint arXiv:1909.12488, 2019.
  8. “Personalized federated learning: A meta-learning approach,” arXiv preprint arXiv:2002.07948, 2020.
  9. “Ditto: Fair and robust federated learning through personalization,” in International Conference on Machine Learning. PMLR, 2021, pp. 6357–6368.
  10. “Federated learning with partial model personalization,” arXiv preprint arXiv:2204.03809, 2022.
  11. “Fedbn: Federated learning on non-iid features via local batch normalization,” arXiv preprint arXiv:2102.07623, 2021.
  12. “Exploiting shared representations for personalized federated learning,” arXiv preprint arXiv:2102.07078, 2021.
  13. “Disentangled federated learning for tackling attributes skew via invariant aggregation and diversity transferring,” arXiv preprint arXiv:2206.06818, 2022.
  14. “Upfl: Unsupervised personalized federated learning towards new clients,” 2023.
  15. “Feature distribution matching for federated domain generalization,” 2022.
  16. Pattern classification, John Wiley & Sons, 2006.
  17. “Fast incremental lda feature extraction,” Pattern Recognition, vol. 48, no. 6, pp. 1999–2012, 2015.
  18. “Measuring the effects of non-identical data distribution for federated visual classification,” 2019.
  19. C. Chatterjee and V.P. Roychowdhury, “On self-organizing algorithms and networks for class-separability features,” IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 663–678, 1997.