Accelerated Consensus for Multi-Agent Networks through Delayed Self Reinforcement (1812.11536v1)

Published 30 Dec 2018 in cs.SY

Abstract: This article aims to improve the performance of networked multi-agent systems, which are common representations of cyber-physical systems. The rate of convergence to consensus of a multi-agent network is critical to ensuring a cohesive, rapid response to external stimuli. The challenge is that increasing the rate of convergence can require changes to the network connectivity, which might not always be feasible. Note that current consensus-seeking control laws can be considered a gradient-based search over the graph's Laplacian potential. The main contribution of this article is to improve convergence to consensus by using an accelerated gradient-based search approach. Additionally, this work shows that the accelerated-consensus approach can be implemented in a distributed manner, where each agent applies a delayed self reinforcement (DSR), without the need for additional network information or changes to the network connectivity. Simulation results for an example networked system show that the proposed accelerated-consensus approach with DSR can improve synchronization during the transition by about ten times, in addition to roughly halving the transition time, compared to the case without DSR. This is shown to improve formation control during transitions in networked multi-agent systems.
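The abstract describes consensus as a gradient-based search over the graph's Laplacian potential, accelerated by a term each agent computes from its own delayed state. A minimal sketch of that idea, assuming a heavy-ball (momentum-style) form of the acceleration and an illustrative 4-agent path graph — the specific Laplacian, gains `gamma` and `beta`, and the exact DSR law are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

# Assumed example topology: path graph on 4 agents, with graph Laplacian L.
L = np.array([
    [ 1, -1,  0,  0],
    [-1,  2, -1,  0],
    [ 0, -1,  2, -1],
    [ 0,  0, -1,  1],
], dtype=float)

def consensus(x0, gamma=0.3, beta=0.0, steps=200):
    """Gradient-style consensus iteration x+ = x - gamma * L @ x,
    optionally adding a delayed-self-reinforcement term
    beta * (x - x_prev) -- purely local, since each agent only
    reuses its own previous step (no extra network information)."""
    x_prev = x0.copy()
    x = x0 - gamma * (L @ x0)          # first plain gradient step
    for _ in range(steps - 1):
        x_next = x - gamma * (L @ x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

x0 = np.array([1.0, 0.0, 0.0, 0.0])
plain = consensus(x0, beta=0.0)        # standard consensus law
dsr = consensus(x0, beta=0.4)          # with momentum-style DSR term
# Both runs converge toward the average of x0 (0.25 here); the DSR
# term damps the slowest mode faster for suitable gains.
```

Because `1.T @ L = 0`, both updates preserve the network average, so each run settles at the mean of the initial states; the delayed term only reshapes the transient, which is the mechanism the abstract credits for faster, more synchronized transitions.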

Citations (7)
