Optimal Privacy Preserving for Federated Learning in Mobile Edge Computing

(arXiv:2211.07166)
Published Nov 14, 2022 in cs.LG, cs.CR, and cs.DC

Abstract

Federated Learning (FL) with quantization and deliberately added noise over wireless networks is a promising approach to preserve user differential privacy (DP) while reducing wireless resource usage. Specifically, an FL process can be fused with quantized, Binomial mechanism-based updates contributed by multiple users. However, optimizing the quantization parameters, communication resources (e.g., transmit power, bandwidth, and quantization bits), and the added noise to guarantee the DP requirement and the performance of the learned FL model remains an open and challenging problem. This article aims to jointly optimize the quantization parameters, Binomial mechanism parameters, and communication resources to maximize the convergence rate under the constraints of the wireless network and the DP requirement. To that end, we first derive a novel DP budget estimate for FL with quantization/noise that is tighter than the state-of-the-art bound. We then provide a theoretical bound on the convergence rate. This bound decomposes into two components: the variance of the global gradient and the quadratic bias, both of which can be minimized by optimizing the communication resources and the quantization/noise parameters. The resulting optimization turns out to be a Mixed-Integer Non-linear Programming (MINLP) problem. To tackle it, we first transform the MINLP problem into a new problem whose solutions are proved to be optimal solutions of the original one. We then propose an approximate algorithm that solves the transformed problem with an arbitrary relative error guarantee. Extensive simulations show that, under the same wireless resource constraints and DP protection requirements, the proposed approximate algorithm achieves an accuracy close to that of conventional FL without quantization/noise. As a result, the approach achieves a higher convergence rate while preserving users' privacy.
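
To make the abstract's setup concrete, below is a minimal, hypothetical sketch of a quantized client update with Binomial-mechanism noise, in the spirit of the mechanism the paper builds on. The function names, default parameter values, and the fixed (rather than jointly optimized) quantization/noise settings are illustrative assumptions, not the paper's implementation; in the paper, these quantities are chosen by solving the MINLP described above.

```python
import numpy as np


def quantize_and_privatize(grad, num_bits=4, clip=1.0, binom_trials=32,
                           binom_p=0.5, rng=None):
    """Client side: stochastically quantize a clipped gradient and add
    Binomial noise. All parameter values are illustrative defaults,
    not the optimized values the paper solves for."""
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** num_bits - 1

    # Clip and map the gradient onto the integer grid {0, ..., levels}.
    g = np.clip(grad, -clip, clip)
    scaled = (g + clip) / (2 * clip) * levels

    # Unbiased stochastic rounding to integer quantization levels.
    floor = np.floor(scaled)
    quantized = floor + (rng.random(scaled.shape) < (scaled - floor))

    # Binomial mechanism: discrete noise whose parameters (together with
    # the quantization bits) determine the achievable DP budget.
    noise = rng.binomial(binom_trials, binom_p, size=quantized.shape)
    return (quantized + noise).astype(np.int64)


def dequantize(noisy_sum, num_clients=1, num_bits=4, clip=1.0,
               binom_trials=32, binom_p=0.5):
    """Server side: subtract the known noise mean from the aggregated
    integers and map back to the original gradient range."""
    levels = 2 ** num_bits - 1
    centered = noisy_sum - num_clients * binom_trials * binom_p
    return centered / (num_clients * levels) * (2 * clip) - clip


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = [rng.normal(scale=0.1, size=8) for _ in range(20)]
    # Each client sends only noisy integers; the server averages them.
    sent = sum(quantize_and_privatize(g, rng=rng) for g in grads)
    recovered = dequantize(sent, num_clients=len(grads))
    print(np.round(recovered, 3), np.round(np.mean(grads, axis=0), 3))
```

In this sketch, the recovered average is unbiased but its error grows with the Binomial noise variance and shrinks with the number of quantization levels and contributing clients, which is the trade-off the paper's joint optimization over quantization bits, noise parameters, and communication resources is designed to balance.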
