Abstract

This work considers resilient, cooperative state estimation in unreliable multi-agent networks. A network of agents aims to collaboratively estimate the value of an unknown vector parameter, while an {\em unknown} subset of agents suffers Byzantine faults. Faulty agents malfunction arbitrarily and may send out {\em highly unstructured} messages to other agents in the network. Unlike in fault-free networks, reaching agreement in the presence of Byzantine faults is far from trivial. In this paper, we propose a computationally efficient algorithm that is provably robust to Byzantine faults. At each iteration of the algorithm, a good agent (1) performs a gradient descent update based on noisy local measurements, (2) exchanges its update with other agents in its neighborhood, and (3) robustly aggregates the received messages using coordinate-wise trimmed means. Under mild technical assumptions, we establish that good agents learn the true parameter asymptotically in the almost sure sense. We further complement our analysis by proving a (high-probability) {\em finite-time} convergence rate that encapsulates network characteristics.
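The per-iteration structure described in the abstract can be sketched as follows. This is a minimal illustration rather than the authors' exact method: the linear least-squares measurement model (matrix `A`, measurements `y`), the step size, the inclusion of the agent's own update in the aggregation, and the trimming count `trim_k` are all assumptions made here for concreteness.

```python
import numpy as np

def trimmed_mean(vectors, trim_k):
    """Coordinate-wise trimmed mean: for each coordinate, drop the trim_k
    largest and trim_k smallest values, then average what remains.
    trim_k should be at least the number of possibly Byzantine neighbors."""
    V = np.sort(np.asarray(vectors), axis=0)      # sort each coordinate independently
    kept = V[trim_k:len(vectors) - trim_k]        # discard the extremes per coordinate
    return kept.mean(axis=0)

def good_agent_step(x, A, y, neighbor_msgs, trim_k, step_size=0.01):
    """One iteration for a non-faulty agent under a hypothetical
    least-squares model y ~ A x + noise:
      (1) local gradient-descent update from noisy measurements,
      (2) neighbor messages (possibly Byzantine) arrive in neighbor_msgs,
      (3) coordinate-wise trimmed-mean aggregation of own update and messages."""
    grad = A.T @ (A @ x - y)                      # gradient of 0.5 * ||A x - y||^2
    local_update = x - step_size * grad           # step (1): local descent
    all_msgs = list(neighbor_msgs) + [local_update]  # step (2): received + own
    return trimmed_mean(all_msgs, trim_k)         # step (3): robust aggregation

# Usage sketch: one Byzantine neighbor sends an arbitrary outlier vector.
rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])                     # unknown parameter
A = rng.normal(size=(5, 2))
y = A @ theta + 0.1 * rng.normal(size=5)          # noisy local measurements
msgs = [theta + 0.05 * rng.normal(size=2) for _ in range(4)]
msgs.append(np.array([1e6, -1e6]))                # highly unstructured Byzantine message
x_next = good_agent_step(np.zeros(2), A, y, msgs, trim_k=1)
```

Because the trimming is done per coordinate, a single extreme message cannot pull any coordinate of the aggregate arbitrarily far, which is the intuition behind the robustness guarantee stated in the abstract.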
