Reaching optimal distributed estimation through myopic self-confidence adaptation (2207.01384v2)

Published 4 Jul 2022 in math.OC, cs.GT, cs.SI, cs.SY, and eess.SY

Abstract: Consider discrete-time linear distributed averaging dynamics, whereby agents in a network start with uncorrelated and unbiased noisy measurements of a common underlying parameter (state of the world) and iteratively update their estimates following a non-Bayesian rule. Specifically, let every agent update her estimate to a convex combination of her own current estimate and those of her neighbors in the network. As a result of this iterative averaging, each agent obtains an asymptotic estimate of the state of the world, and the variance of this individual estimate depends on the matrix of weights the agents assign to themselves and to the others. We study a game-theoretic multi-objective optimization problem whereby every agent seeks to choose her self-weight in such a convex combination so as to minimize the variance of her asymptotic estimate of the unknown parameter. Assuming that the relative influence weights assigned by the agents to their neighbors in the network remain fixed and form an irreducible and aperiodic relative influence matrix, we characterize the Pareto frontier of the problem, as well as the set of Nash equilibria in the resulting game.
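A minimal sketch of the dynamics the abstract describes, under assumed notation: each agent i keeps a self-weight lam_i and updates her estimate to a convex combination of her own estimate and a P-weighted average of her neighbors' estimates, where P is a fixed row-stochastic, irreducible, aperiodic relative influence matrix. The specific network (a ring), the self-weight values, and the variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
theta = 1.0                        # common underlying parameter ("state of the world")
x = theta + rng.normal(size=n)     # unbiased, uncorrelated noisy measurements

# Hypothetical relative influence matrix: a ring, each agent splitting
# weight equally between her two neighbors (irreducible; zero diagonal).
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = 0.5
    P[i, (i - 1) % n] = 0.5

lam = np.full(n, 0.3)              # illustrative self-weights chosen by the agents

# Combined update matrix: x <- W @ x with W = diag(lam) + diag(1 - lam) @ P.
# W is row-stochastic, and aperiodic thanks to the positive diagonal.
W = np.diag(lam) + np.diag(1 - lam) @ P

for _ in range(500):
    x = W @ x

# Estimates reach consensus on pi^T x(0), where pi is W's stationary
# distribution; with unit-variance uncorrelated noise, the asymptotic
# variance of each agent's estimate is ||pi||_2^2.
pi = np.linalg.matrix_power(W, 1000)[0]
print("consensus estimate:", x[0], "asymptotic variance:", pi @ pi)
```

Changing `lam` reweights the stationary distribution `pi`, which is the lever each agent pulls in the game studied in the paper: her self-weight shifts `||pi||_2^2` and hence everyone's asymptotic variance.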

Authors (3)
  1. Giacomo Como (65 papers)
  2. Fabio Fagnani (52 papers)
  3. Anton V. Proskurnikov (39 papers)
Citations (2)
