Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks (2105.12597v1)

Published 26 May 2021 in math.OC and cs.DC

Abstract: We consider a distributed convex optimization problem in a network that is time-varying and not always strongly connected. The local cost function of each node is affected by some stochastic process. All nodes of the network collaborate to minimize the average of their local cost functions. The major challenge of our work is that the gradients of the cost functions are assumed to be unavailable and must be estimated only from numerical observations of the cost functions. Such a problem is known as zeroth-order stochastic convex optimization (ZOSCO). In this paper we take a first step towards the distributed optimization problem in the ZOSCO setting. The proposed algorithm contains two basic steps at each iteration: i) each unit updates a local variable according to a random-perturbation-based single-point gradient estimator of its own local cost function; ii) each unit exchanges its local variable with its direct neighbors and then performs a weighted average. When the cost function is smooth and strongly convex, the attainable optimization error is $O(T^{-1/2})$ after $T$ iterations. This result is interesting because $O(T^{-1/2})$ is the optimal convergence rate for the ZOSCO problem. We also investigate the optimization error for general Lipschitz convex functions, where the result is $O(T^{-1/4})$.
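The two-step iteration described in the abstract can be illustrated with a minimal sketch. The helper names (`single_point_grad_estimate`, `distributed_zo_step`), the Gaussian-direction perturbation, and the doubly-stochastic weight matrix `W` are assumptions for illustration only; the paper's exact estimator, step sizes, and mixing weights may differ.

```python
import numpy as np

def single_point_grad_estimate(f, x, delta, rng):
    """Single-point zeroth-order gradient estimate: perturb x along a random
    unit direction u and scale the (noisy) function value by (d / delta) * u.
    This is a generic estimator of this type, not necessarily the paper's."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                 # random unit direction
    return (d / delta) * f(x + delta * u) * u

def distributed_zo_step(xs, fs, W, delta, eta, rng):
    """One iteration over all n nodes.

    xs : (n, d) array of local variables, one row per node
    fs : list of n local cost oracles (numerical observations only)
    W  : (n, n) doubly-stochastic weight matrix of the current graph
    eta, delta : step size and perturbation radius (hypothetical choices)
    """
    n, _ = xs.shape
    updated = np.empty_like(xs)
    # (i) each unit updates its local variable with the zeroth-order estimate
    for i in range(n):
        g_hat = single_point_grad_estimate(fs[i], xs[i], delta, rng)
        updated[i] = xs[i] - eta * g_hat
    # (ii) each unit exchanges with direct neighbors and takes a weighted average
    return W @ updated
```

Since the network is time-varying, the weight matrix `W` would be regenerated at every iteration from the current set of edges; the sketch simply takes it as an input.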

Citations (2)
