Emergent Mind

Abstract

In this paper, we study the relationship between resilience and accuracy in the resilient distributed multi-dimensional consensus problem. We consider a network of agents, each of which has a state in $\mathbb{R}^d$. Some agents in the network are adversarial and can change their states arbitrarily. The normal (non-adversarial) agents interact locally and update their states to achieve consensus at some point in the convex hull $\mathcal{C}$ of their initial states. This objective is achievable if the number of adversaries in the neighborhood of each normal agent is less than a specific value, which is a function of the local connectivity and the state dimension $d$. However, the local connectivity required for resilience against adversaries is large, especially when $d$ is large. We show that resilience against adversarial agents can be improved if normal agents are allowed to converge in a bounded region $\mathcal{B}\supseteq\mathcal{C}$, meaning that, in the worst case, normal agents converge at some point close to but not necessarily inside $\mathcal{C}$. The accuracy of resilient consensus can be measured by the Hausdorff distance between $\mathcal{B}$ and $\mathcal{C}$. As a result, resilience can be improved at the cost of accuracy. We propose a resilient bounded consensus algorithm that exploits this trade-off between resilience and accuracy by projecting $d$-dimensional states onto lower-dimensional subspaces and then solving instances of resilient consensus in those lower dimensions. We analyze the algorithm, present various resilience and accuracy bounds, and numerically evaluate our results.
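To make the projection idea concrete, below is a minimal sketch of one round of the approach described in the abstract, under two assumptions not stated in the abstract: the lower-dimensional instances are one-dimensional, and the projection directions form an orthonormal basis of $\mathbb{R}^d$ (e.g., the coordinate axes). The scalar resilient rule used here, a trimmed mean that discards up to $F$ extreme neighbor values on each side, is a standard stand-in rather than necessarily the exact rule from the paper, and all function and parameter names are hypothetical.

```python
import numpy as np

def trimmed_mean_update(own_value, neighbor_values, F):
    """Scalar resilient update (a standard trimmed-mean stand-in): discard up
    to F neighbor values above and up to F below the agent's own value, then
    average what remains together with the agent's own value."""
    vals = np.asarray(neighbor_values, dtype=float)
    higher = np.sort(vals[vals > own_value])[::-1]   # largest first
    lower = np.sort(vals[vals < own_value])          # smallest first
    kept = np.concatenate([higher[F:], lower[F:], vals[vals == own_value]])
    return np.append(kept, own_value).mean()

def projected_resilient_step(states, adjacency, F, directions):
    """One round of the projection-based idea (illustrative sketch only):
    project each d-dimensional state onto 1-D directions, apply the scalar
    resilient rule along each direction, and recombine the results.

    states:     (n, d) array of current agent states
    adjacency:  (n, n) 0/1 matrix of the communication graph
    F:          assumed local bound on adversarial neighbors
    directions: (d, d) array whose rows form an orthonormal basis (assumption)
    """
    n, d = states.shape
    new_states = states.copy()
    for i in range(n):
        nbrs = np.flatnonzero(adjacency[i])
        if nbrs.size == 0:
            continue
        coords = [
            trimmed_mean_update(states[i] @ u, states[nbrs] @ u, F)
            for u in directions
        ]
        # Reassemble the d-dimensional state from its per-direction components.
        new_states[i] = np.array(coords) @ directions
    return new_states
```

With the standard coordinate axes as `directions`, this sketch reduces to applying the scalar resilient rule coordinate-wise; other choices of projections correspond to solving the lower-dimensional instances along different subspaces.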
