DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated Learning as a Service (2402.09715v1)

Published 15 Feb 2024 in cs.DC, cs.CR, and cs.LG

Abstract: Federated learning (FL) has emerged as a prevalent distributed machine learning scheme that enables collaborative model training without aggregating raw data. Cloud service providers further embrace Federated Learning as a Service (FLaaS), allowing data analysts to execute their FL training pipelines over differentially-protected data. Due to the intrinsic properties of differential privacy, the enforced privacy level on data blocks can be viewed as a privacy budget that requires careful scheduling to cater to diverse training pipelines. Existing privacy budget scheduling studies prioritize either efficiency or fairness individually. In this paper, we propose DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes both efficiency and fairness. We first develop a comprehensive utility function incorporating data analyst-level dominant shares and FL-specific performance metrics. A sequential allocation mechanism is then designed using the Lagrange multiplier method and effective greedy heuristics. We theoretically prove that DPBalance satisfies Pareto Efficiency, Sharing Incentive, Envy-Freeness, and Weak Strategy Proofness. We also theoretically prove the existence of a fairness-efficiency tradeoff in privacy budgeting. Extensive experiments demonstrate that DPBalance outperforms state-of-the-art solutions, achieving an average efficiency improvement of $1.44\times \sim 3.49 \times$, and an average fairness improvement of $1.37\times \sim 24.32 \times$.
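To make the scheduling idea in the abstract concrete, below is a minimal, hypothetical sketch of a dominant-share-style greedy allocator for privacy budgets. It is not the DPBalance mechanism itself (which combines an FL-aware utility function with a Lagrangian sequential allocation); it only illustrates the underlying intuition of treating each data block's differential privacy budget as a consumable resource and granting requests to the analyst with the smallest dominant share. All names (`Analyst`, `block_budgets`, `demand`) are illustrative assumptions.

```python
# Hypothetical sketch: DRF-style greedy privacy budget allocation.
# Each data block has a total epsilon budget; each analyst repeatedly
# requests a fixed epsilon demand per block for its training pipeline.
from dataclasses import dataclass, field

@dataclass
class Analyst:
    name: str
    demand: dict                     # epsilon requested per block, e.g. {"b0": 0.1}
    allocated: dict = field(default_factory=dict)

def dominant_share(analyst, total_budgets):
    # Dominant share = largest fraction of any block's budget already consumed.
    if not analyst.allocated:
        return 0.0
    return max(analyst.allocated.get(b, 0.0) / total_budgets[b] for b in total_budgets)

def greedy_allocate(analysts, block_budgets, rounds=100):
    remaining = dict(block_budgets)
    for _ in range(rounds):
        # Favor the analyst with the smallest dominant share.
        candidates = sorted(analysts, key=lambda a: dominant_share(a, block_budgets))
        progressed = False
        for a in candidates:
            # Grant one unit of demand only if every requested block still has budget.
            if all(remaining.get(b, 0.0) >= eps for b, eps in a.demand.items()):
                for b, eps in a.demand.items():
                    remaining[b] -= eps
                    a.allocated[b] = a.allocated.get(b, 0.0) + eps
                progressed = True
                break
        if not progressed:
            break                    # No remaining demand fits the leftover budgets.
    return remaining

if __name__ == "__main__":
    blocks = {"b0": 1.0, "b1": 1.0}  # total epsilon per data block
    analysts = [Analyst("A", {"b0": 0.2}), Analyst("B", {"b0": 0.1, "b1": 0.3})]
    leftover = greedy_allocate(analysts, blocks)
    for a in analysts:
        print(a.name, a.allocated)
    print("remaining:", leftover)
```

A pure dominant-share rule like this targets fairness only; the paper's contribution is to fold FL-specific efficiency metrics into the utility being maximized, which is where the fairness-efficiency tradeoff proved in the paper arises.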
