Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization (1602.01716v2)

Published 4 Feb 2016 in math.OC, cs.IT, and math.IT

Abstract: We develop algorithms that find and track the optimal solution trajectory of time-varying convex optimization problems that consist of local and network-related objectives. The algorithms are derived from the prediction-correction methodology, a strategy in which the time-varying problem is sampled at discrete time instances and a sequence is generated by alternately executing predictions of how the optimizers will change at the next time sample and corrections of how they actually have changed. Prediction is based on how the optimality conditions evolve in time, while correction is based on a gradient or Newton method, leading to Decentralized Prediction-Correction Gradient (DPC-G) and Decentralized Prediction-Correction Newton (DPC-N). We extend these methods to cases where knowledge of how the optimization problems change in time is only approximate, and propose Decentralized Approximate Prediction-Correction Gradient (DAPC-G) and Decentralized Approximate Prediction-Correction Newton (DAPC-N). Convergence properties of all the proposed methods are studied, and empirical performance is demonstrated on a resource allocation problem in a wireless network. We observe that the proposed methods outperform existing running algorithms by orders of magnitude. The numerical results showcase a trade-off between convergence accuracy, sampling period, and network communications.
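The prediction-correction strategy described in the abstract can be sketched in a minimal, centralized form for a scalar time-varying problem. This is an illustrative toy, not the decentralized DPC-G/DPC-N algorithms of the paper: the objective f(x, t) = ½(x − cos t)², the step sizes, and the finite-difference approximation of the mixed derivative ∂ₜ∇f are all assumptions made for the example.

```python
import math

# Toy time-varying objective: f(x, t) = 0.5 * (x - a(t))^2 with a(t) = cos(t).
# The optimal trajectory is x*(t) = a(t); we track it by prediction-correction.
def a(t):
    return math.cos(t)

def grad(x, t):            # gradient of f with respect to x
    return x - a(t)

def hess(x, t):            # Hessian of f with respect to x (constant here)
    return 1.0

def mixed(x, t, dt=1e-6):  # time derivative of the gradient, by finite differences
    return (grad(x, t + dt) - grad(x, t)) / dt

h = 0.1        # sampling period (assumed)
gamma = 0.5    # correction step size (assumed)
x = 0.0        # initial iterate
errors = []
for k in range(200):
    t = k * h
    # Prediction: move along the flow of the optimality condition,
    # x_pred = x - hess^{-1} * (d/dt grad) * h
    x = x - mixed(x, t) / hess(x, t) * h
    # Correction: one gradient step on the newly sampled problem at t + h
    x = x - gamma * grad(x, t + h)
    errors.append(abs(x - a(t + h)))
print(f"final tracking error: {errors[-1]:.2e}")
```

The prediction step extrapolates the iterate using how the optimality conditions drift in time, so the subsequent gradient correction only has to absorb the higher-order tracking error; with gradient corrections this mirrors the structure of DPC-G, while replacing the correction with a Newton step would mirror DPC-N.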

Citations (61)
