
A Survey on Federated Unlearning: Challenges, Methods, and Future Directions (2310.20448v4)

Published 31 Oct 2023 in cs.CR

Abstract: In recent years, the notion of "the right to be forgotten" (RTBF) has become a crucial aspect of data privacy for digital trust and AI safety, requiring mechanisms that support the removal of individuals' personal data upon request. Consequently, machine unlearning (MU), which allows an ML model to selectively eliminate identifiable information, has gained considerable attention. Evolving from MU, federated unlearning (FU) has emerged to confront the challenge of data erasure within federated learning (FL) settings, empowering the FL model to unlearn an FL client or identifiable information pertaining to that client. Nevertheless, the distinctive attributes of federated learning introduce specific challenges for FU techniques, which necessitate a tailored design when developing FU algorithms. While various concepts and numerous federated unlearning schemes exist in this field, the unified workflow and tailored design of FU are not yet well understood. Therefore, this comprehensive survey delves into the techniques and methodologies of FU, providing an overview of fundamental concepts and principles, evaluating existing federated unlearning algorithms, and reviewing optimizations tailored to federated learning. Additionally, it discusses practical applications and assesses their limitations. Finally, it outlines promising directions for future research.
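To make the core idea concrete: in federated learning, a server aggregates client updates (e.g., via FedAvg), and a naive form of federated unlearning re-aggregates the model without the target client's contribution. The sketch below is a hypothetical single-round illustration, not any specific algorithm from the survey; the update vectors and weights are invented for demonstration.

```python
def fed_avg(updates, weights):
    """FedAvg-style aggregation: weighted average of client parameter vectors."""
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[j] for u, w in zip(updates, weights)) / total
            for j in range(dim)]

# Hypothetical client updates (2-parameter models) and data-size weights.
updates = [[1.0, 2.0], [3.0, 0.0], [0.0, 4.0]]
weights = [10, 20, 10]

# Global model aggregated over all clients.
global_model = fed_avg(updates, weights)

# "Unlearn" client 1 by re-aggregating over the remaining clients only.
retained = [i for i in range(len(updates)) if i != 1]
unlearned_model = fed_avg([updates[i] for i in retained],
                          [weights[i] for i in retained])
```

In realistic multi-round FL this exact re-aggregation is not available (client contributions are entangled across rounds), which is precisely why the FU techniques surveyed here require tailored designs.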

Citations (15)
