Cohesive Networks using Delayed Self Reinforcement

(2003.06679)
Published Mar 14, 2020 in eess.SY, cs.MA, and cs.SY

Abstract

How a network gets to the goal (a consensus value) can be as important as reaching it. While prior methods focus on rapidly reaching a new consensus value, maintaining cohesion during the transition between consensus values, or during tracking, remains challenging and has not been addressed. The main contributions of this work are to address the problem of maintaining cohesion by: (i) proposing a new delayed self-reinforcement (DSR) approach; (ii) extending it to agents with higher-order, heterogeneous dynamics; and (iii) developing stability conditions for the DSR-based method. With DSR, each agent uses current and past information from its neighbors to infer the overall goal and modifies its update law to improve cohesion. The advantage of the proposed DSR approach is that it improves cohesion using only information already available in a given network: it requires neither modifications to the network connectivity (which might not always be feasible) nor increases in the system's overall response speed (which can require larger inputs). Illustrative simulation examples are used to comparatively evaluate performance with and without DSR, and the results show substantial improvement in cohesion with DSR.
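
The abstract describes the DSR idea only at a high level: each agent augments a standard consensus update with a term built from already-available past (delayed) information. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's equations: it uses a pinned graph-Laplacian consensus update on a 5-agent path network tracking a step change in a source value, and adds a delayed self-reinforcement term formed from each agent's own previous state change. The graph, gains (`gamma`, `beta`), time step, and exact update form are all assumptions made for illustration.

```python
# Hypothetical sketch of delayed self-reinforcement (DSR) on a small network.
# The update law, gains, and delay structure are illustrative assumptions,
# not the equations from the paper.
import numpy as np

# Path graph of 5 agents; agent 0 is pinned to a (virtual) source/leader.
N = 5
L = np.diag([1, 2, 2, 2, 1]) - (np.eye(N, k=1) + np.eye(N, k=-1))  # path Laplacian
B = np.zeros(N)
B[0] = 1.0                             # only agent 0 sees the source
K = L + np.diag(B)                     # "pinned" Laplacian

gamma, dt = 1.0, 0.05                  # update gain and time step (assumed)
beta = 0.8                             # DSR gain (assumed)
steps = 400
x_s = 1.0                              # source steps from 0 to 1 at t = 0


def simulate(use_dsr: bool) -> np.ndarray:
    x = np.zeros(N)                    # current agent states
    x_prev = x.copy()                  # one-step-delayed states used by DSR
    history = [x.copy()]
    for _ in range(steps):
        # Standard diffusive update toward neighbors and the pinned source.
        u = -K @ x + B * x_s
        x_new = x + gamma * dt * u
        if use_dsr:
            # Delayed self-reinforcement: reuse the agent's own previous change,
            # which already encodes past information received from neighbors.
            x_new += beta * (x - x_prev)
        x_prev, x = x, x_new
        history.append(x.copy())
    return np.array(history)


plain = simulate(use_dsr=False)
dsr = simulate(use_dsr=True)

# A rough cohesion measure: worst-case spread between agents during the transition.
print("max spread without DSR:", np.ptp(plain, axis=1).max())
print("max spread with DSR:   ", np.ptp(dsr, axis=1).max())
```

Comparing the printed spreads gives a crude, qualitative sense of the cohesion comparison the abstract describes; the paper's own evaluation uses its specific DSR update law and stability conditions rather than this simplified momentum-like stand-in.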
