
Convergence Analysis of a Cooperative Diffusion Gauss-Newton Strategy (1811.12806v2)

Published 29 Nov 2018 in math.OC and cs.DC

Abstract: In this paper, we investigate the convergence performance of a cooperative diffusion Gauss-Newton (GN) method, which is widely used to solve nonlinear least squares (NLLS) problems owing to its low computational cost compared with Newton's method. This diffusion GN method collects diverse temporal-spatial information over the network and uses it in the local updates. To address the challenges of the convergence analysis, we first form a global recursion relation over the spatial and temporal scales, since the traditional GN method is iterative in time and the NLLS problem must be solved network-wide. Second, the derived recursion for the network-wide deviation between two successive iterations is ambiguous, owing to the uncertain descent discrepancy in the GN update step between the cooperative and non-cooperative versions; an important task is therefore to derive boundedness conditions for this discrepancy. Finally, based on the temporal-spatial recursion relation and the steady-state equilibrium theory for discrete dynamical systems, we obtain sufficient conditions for algorithm convergence, which require good initial guesses, reasonable step-size values, and network connectivity. This analysis provides a guideline for applications based on the diffusion GN method.
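The abstract describes a scheme in which each node performs a local GN update and then diffuses its intermediate estimate to neighbors. The following is a minimal sketch of one such adapt-then-combine diffusion GN loop on a toy scalar NLLS problem; the model `m(theta, t) = exp(theta * t)`, the combination matrix, and all function names are illustrative assumptions of mine, not taken from the paper.

```python
import numpy as np

def local_gn_step(theta, t, y, alpha=0.5):
    """One damped Gauss-Newton update on node-local data (illustrative model)."""
    m = np.exp(theta * t)      # model prediction m(theta, t) = exp(theta * t)
    r = y - m                  # local residual
    J = t * m                  # Jacobian dm/dtheta
    delta = (J @ r) / (J @ J)  # GN direction (J^T J)^{-1} J^T r (scalar case)
    return theta + alpha * delta

def diffusion_gn(data, A, theta0=0.1, alpha=0.5, iters=200):
    """Adapt-then-combine sweep.

    data: list of (t, y) sample arrays, one pair per node
    A:    combination matrix with rows summing to 1 (network connectivity)
    """
    theta = np.full(len(data), theta0)
    for _ in range(iters):
        # adapt: each node takes a local GN step
        psi = np.array([local_gn_step(th, t, y, alpha)
                        for th, (t, y) in zip(theta, data)])
        # combine: diffuse intermediate estimates over the network
        theta = A @ psi
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_theta = 0.5
    data = []
    for _ in range(3):  # three cooperating nodes, noise-free local samples
        t = rng.uniform(0.0, 2.0, 20)
        data.append((t, np.exp(true_theta * t)))
    A = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])  # doubly stochastic weights
    print(diffusion_gn(data, A))  # all node estimates near true_theta
```

The sketch mirrors the abstract's sufficient conditions: convergence here depends on the initial guess `theta0`, the damping step size `alpha`, and the connectivity encoded in `A`.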

Authors (3)
  1. Mou Wu (2 papers)
  2. Naixue Xiong (16 papers)
  3. Liansheng Tan (1 paper)
Citations (2)
