Event-Driven Receding Horizon Control for Distributed Estimation in Network Systems

(arXiv:2009.11958)

Published Sep 24, 2020 in eess.SY, cs.SY, and math.OC

Abstract

We consider the problem of estimating the states of a distributed network of nodes (targets) through a team of cooperating agents (sensors) that persistently visit the nodes, so that an overall measure of estimation error covariance evaluated over a finite period is minimized. We formulate this as a multi-agent persistent monitoring problem where the goal is to control each agent's trajectory, defined as a sequence of target visits and the corresponding dwell times spent making observations at each visited target. A distributed online agent controller is developed in which each agent solves a sequence of receding horizon control problems (RHCPs) in an event-driven manner. A novel objective function is proposed for these RHCPs to optimize the effectiveness of this distributed estimation process, and its unimodality property is established under some assumptions. Moreover, a machine learning solution is proposed to improve the computational efficiency of this distributed estimation process by exploiting the history of each agent's trajectory. Finally, extensive numerical results are provided, indicating significant improvements compared to other state-of-the-art agent controllers.
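To build intuition for the estimation objective described above, the sketch below simulates the error covariance of a single target: the covariance grows while no agent is present and contracts via Kalman-style corrections while an agent dwells at the target. This is a minimal illustrative toy, not the paper's actual controller or objective; the noise parameters `Q`, `R`, the time step, and the dwell window are all assumed values chosen for illustration.

```python
# Toy illustration (assumed model, not the paper's formulation): scalar
# estimation error covariance of one target under persistent monitoring.
Q = 0.5   # assumed process-noise intensity (covariance growth rate)
R = 1.0   # assumed measurement-noise variance while an agent observes

def evolve_cov(P, observed, dt=0.1):
    """One discrete step of covariance dynamics.

    The covariance grows by Q*dt (prediction); if an agent is dwelling at
    the target, a Kalman correction shrinks it (update).
    """
    P = P + Q * dt                # prediction: uncertainty accumulates
    if observed:
        K = P / (P + R)           # Kalman gain for a scalar measurement
        P = (1.0 - K) * P         # posterior covariance after the update
    return P

P = 1.0
trace = []
for step in range(100):
    # Hypothetical dwell window: an agent observes the target for steps 40..59.
    observed = 40 <= step < 60
    P = evolve_cov(P, observed)
    trace.append(P)
```

Tracing `trace` shows the qualitative behavior the abstract's objective penalizes: covariance climbs before the visit, drops sharply during the dwell, and climbs again after the agent departs. The controller in the paper chooses visit sequences and dwell times to keep an overall measure of such covariances small over a finite horizon.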

