Topology-Dependent Privacy Bound For Decentralized Federated Learning (2312.07956v1)

Published 13 Dec 2023 in cs.DC

Abstract: Decentralized Federated Learning (FL) has attracted significant attention due to its enhanced robustness and scalability compared to its centralized counterpart. It relies on peer-to-peer communication rather than a central server for model aggregation. While prior research has examined various aspects of decentralized FL, such as aggregation methods and privacy-preserving techniques, one crucial factor affecting privacy remains relatively unexplored: the underlying graph topology. In this paper, we fill this gap by deriving a tight privacy bound for decentralized FL under the condition that accuracy is not compromised, highlighting the pivotal role of graph topology. Specifically, we show that the minimum privacy loss at each model aggregation step depends on the size of what we term 'honest components': the maximal connected subgraphs that remain once all untrustworthy participants are excluded from the network, a quantity closely tied to network robustness. Our analysis suggests that attack-resilient networks provide a superior privacy guarantee. We validate this by studying both Poisson and power-law networks, showing that the latter, being less robust against attacks, indeed leaks more private information. Beyond the theoretical analysis, we consolidate our findings by examining two distinct privacy attacks: membership inference and gradient inversion.
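The sketch below illustrates the notion of honest components described in the abstract: remove the untrustworthy participants from the communication graph and look at the connected components that remain. It is a minimal, illustrative example (not the paper's experimental setup); the graph sizes, the fraction of untrustworthy nodes, and the targeted-attack heuristic (removing the highest-degree nodes first) are assumptions chosen only to contrast a Poisson (Erdős–Rényi) network with a power-law (Barabási–Albert) one.

```python
# Minimal sketch: honest components of a decentralized FL communication graph.
# Assumptions (not from the paper): n = 1000 nodes, 5% untrustworthy nodes,
# and a targeted attack that corrupts the highest-degree nodes.
import networkx as nx


def honest_components(graph, untrusted):
    """Connected components that remain after removing the untrusted nodes."""
    honest = graph.copy()
    honest.remove_nodes_from(untrusted)
    return list(nx.connected_components(honest))


n, frac_untrusted = 1000, 0.05
k = int(frac_untrusted * n)

# Poisson (Erdos-Renyi) vs. power-law (Barabasi-Albert) topologies with
# comparable average degree.
poisson = nx.gnp_random_graph(n, 6 / n, seed=0)
power_law = nx.barabasi_albert_graph(n, 3, seed=0)

for name, g in [("Poisson", poisson), ("power-law", power_law)]:
    # Targeted attack: the k highest-degree nodes are untrustworthy.
    untrusted = sorted(g.nodes, key=g.degree, reverse=True)[:k]
    comps = honest_components(g, untrusted)
    largest = max((len(c) for c in comps), default=0)
    print(f"{name}: {len(comps)} honest components, largest has {largest} nodes")
```

Under such a targeted attack, the power-law graph tends to fragment into more and smaller honest components than the Poisson graph, which is consistent with the abstract's claim that less attack-resilient topologies offer weaker privacy guarantees.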
