
Distributed Time-Varying Stochastic Optimization and Utility-Based Communication

(1408.5294)
Published Aug 22, 2014 in math.OC, cs.IT, and math.IT

Abstract

We devise a distributed asynchronous stochastic epsilon-gradient-based algorithm that enables a network of computing and communicating nodes to solve a constrained discrete-time, time-varying stochastic convex optimization problem. Each node updates its own decision variable only once per discrete time step. Under standard assumptions (including strong convexity, Lipschitz continuity of the gradient, and persistent excitation), we prove the algorithm's asymptotic convergence in expectation to an error bound whose size depends on the constant stepsize alpha, the temporal variability of the optimization problem, and the accuracy epsilon; moreover, the convergence rate is linear. We then show how each node can locally compute stochastic epsilon-gradients that also depend on the time-varying noise probability density functions (PDFs) of its neighboring nodes, without requiring the neighbors to transmit these PDFs at every time step. We devise utility-based policies that let each node decide whether or not to send its most up-to-date PDF, while guaranteeing a user-specified error level epsilon in the computation of the stochastic epsilon-gradient. Numerical simulations illustrate the added value of the proposed approach and its relevance for the estimation and control of time-varying processes and networked systems.
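To make the setup concrete, below is a minimal sketch of the two ingredients the abstract describes: a projected constant-stepsize stochastic epsilon-gradient update for a single node, and a simple utility-based rule for deciding whether to broadcast an updated noise PDF. This is not the paper's exact algorithm; the constraint set (a box), the toy time-varying quadratic objective, and all helper names (project, epsilon_gradient, node_step, should_send_pdf) are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (assumptions, not the paper's notation):
# - a box constraint set handled by Euclidean projection,
# - a toy time-varying quadratic objective standing in for the true
#   stochastic cost,
# - a total-variation test standing in for the utility-based
#   "send the PDF?" policy.

def project(x, lower, upper):
    """Euclidean projection onto the box [lower, upper]."""
    return np.clip(x, lower, upper)

def epsilon_gradient(x, t, rng, eps):
    """Stochastic epsilon-gradient stand-in at time t.

    In the paper this estimate also depends on (possibly outdated)
    neighbor noise PDFs; here we just perturb the gradient of a
    time-varying quadratic by bounded noise of size ~eps.
    """
    target = np.sin(0.01 * t) * np.ones_like(x)   # drifting optimizer (toy)
    grad = x - target                              # gradient of 0.5*||x - target||^2
    return grad + eps * rng.uniform(-1.0, 1.0, size=x.shape)

def node_step(x, t, alpha, rng, eps, lower=-1.0, upper=1.0):
    """One update: x_{t+1} = proj( x_t - alpha * g_eps(x_t, t) )."""
    g = epsilon_gradient(x, t, rng, eps)
    return project(x - alpha * g, lower, upper)

def should_send_pdf(current_pdf, last_sent_pdf, eps):
    """Assumed form of a utility-based communication rule: broadcast the
    up-to-date PDF only if the mismatch with the last transmitted PDF
    could push the epsilon-gradient error past the user-specified eps."""
    tv_distance = 0.5 * np.sum(np.abs(current_pdf - last_sent_pdf))
    return tv_distance > eps

# Toy usage: run one node for a few steps and check the send rule once.
rng = np.random.default_rng(0)
x = np.zeros(3)
for t in range(100):
    x = node_step(x, t, alpha=0.1, rng=rng, eps=0.05)

current_pdf = np.array([0.2, 0.5, 0.3])
last_sent_pdf = np.array([0.3, 0.4, 0.3])
print(x, should_send_pdf(current_pdf, last_sent_pdf, eps=0.05))
```

As is standard for constant-stepsize tracking schemes, alpha trades asymptotic accuracy for agility: the residual error grows with alpha (and with how fast the problem drifts), but a larger alpha lets the iterates follow a faster-moving optimizer, which is consistent with the error bound described in the abstract.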
