Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks

(arXiv:2105.12597)
Published May 26, 2021 in math.OC and cs.DC

Abstract

We consider a distributed convex optimization problem in a network that is time-varying and not always strongly connected. The local cost function of each node is affected by a stochastic process. All nodes of the network collaborate to minimize the average of their local cost functions. The major challenge of our work is that the gradients of the cost functions are assumed to be unavailable and must be estimated using only numerical observations of the cost functions. This problem is known as zeroth-order stochastic convex optimization (ZOSCO). In this paper we take a first step toward the distributed optimization problem in a ZOSCO setting. The proposed algorithm performs two basic steps at each iteration: i) each node updates a local variable according to a random-perturbation-based single-point gradient estimator of its own local cost function; ii) each node exchanges its local variable with its direct neighbors and then performs a weighted average. When the cost functions are smooth and strongly convex, the attainable optimization error is $O(T^{-1/2})$ after $T$ iterations. This result is interesting because $O(T^{-1/2})$ is the optimal convergence rate for the ZOSCO problem. We also investigate the optimization error for general Lipschitz convex functions, obtaining a rate of $O(T^{-1/4})$.
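The two-step iteration described in the abstract can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the function names, the spherical sampling of the perturbation direction, the doubly stochastic mixing matrix `W`, and the step-size and smoothing schedules (`eta`, `delta`) are all illustrative assumptions.

```python
import numpy as np

def single_point_grad_estimate(f_noisy, x, delta, rng):
    """Estimate the gradient of a noisy cost from a single function query.

    Returns (d / delta) * f(x + delta * u) * u, where u is drawn uniformly
    from the unit sphere (the random perturbation direction).
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)          # uniform direction on the unit sphere
    return (d / delta) * f_noisy(x + delta * u) * u

def distributed_zo_step(xs, costs, W, eta, delta, rng):
    """One iteration of the two-step scheme sketched in the abstract.

    xs    : (n, d) array; row i is node i's local variable
    costs : list of n noisy local cost oracles, costs[i](x) -> float
    W     : (n, n) doubly stochastic mixing matrix for the current graph
    """
    # Step i) each node takes a zeroth-order gradient step on its own cost
    stepped = np.stack([
        xs[i] - eta * single_point_grad_estimate(costs[i], xs[i], delta, rng)
        for i in range(len(costs))
    ])
    # Step ii) exchange with direct neighbors, then take a weighted average
    return W @ stepped

# Toy usage (hypothetical setup): n nodes with noisy quadratic costs.
rng = np.random.default_rng(0)
n, d = 4, 3
targets = rng.standard_normal((n, d))
costs = [lambda x, t=targets[i]:
             float(np.sum((x - t) ** 2) + 0.01 * rng.standard_normal())
         for i in range(n)]
W = np.full((n, n), 1.0 / n)        # complete graph, uniform weights
xs = np.zeros((n, d))
for t in range(1, 2001):
    # Diminishing step size and smoothing radius; schedules are illustrative.
    xs = distributed_zo_step(xs, costs, W,
                             eta=0.5 / t ** 0.75, delta=1.0 / t ** 0.25, rng=rng)
```

With diminishing `eta` and `delta`, the consensus step pulls the local variables together while the zeroth-order steps drive them toward the minimizer of the average cost; the paper's analysis covers the harder time-varying, not-always-strongly-connected graph sequence rather than the fixed complete graph used here.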
