Convergence of the Expectation-Maximization Algorithm Through Discrete-Time Lyapunov Stability Theory (1810.02022v1)

Published 4 Oct 2018 in math.OC, cs.LG, cs.SY, math.DS, and stat.ML

Abstract: In this paper, we propose a dynamical systems perspective of the Expectation-Maximization (EM) algorithm. More precisely, we can analyze the EM algorithm as a nonlinear state-space dynamical system. The EM algorithm is widely adopted for data clustering and density estimation in statistics, control systems, and machine learning. This algorithm belongs to a large class of iterative algorithms known as proximal point methods. In particular, we re-interpret limit points of the EM algorithm and other local maximizers of the likelihood function it seeks to optimize as equilibria in its dynamical system representation. Furthermore, we propose to assess its convergence as asymptotic stability in the sense of Lyapunov. As a consequence, we proceed by leveraging recent results regarding discrete-time Lyapunov stability theory in order to establish asymptotic stability (and thus, convergence) in the dynamical system representation of the EM algorithm.
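To make the dynamical-systems viewpoint concrete, here is a minimal sketch (not the authors' construction) of EM written as a discrete-time state-update map theta_{k+1} = M(theta_k), whose fixed points play the role of equilibria and whose negative log-likelihood serves as a natural Lyapunov-function candidate. The model (a two-component, unit-variance 1D Gaussian mixture) and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def em_step(theta, x):
    """One EM update for a two-component 1D Gaussian mixture with unit variances.

    Viewed as a dynamical system, this is the state-update map theta_{k+1} = M(theta_k),
    where the state theta = (pi, mu1, mu2) collects the mixture weight and the two means.
    (Illustrative model choice, not the paper's general setting.)
    """
    pi, mu1, mu2 = theta
    # E-step: posterior responsibility of component 1 for each sample.
    p1 = pi * np.exp(-0.5 * (x - mu1) ** 2)
    p2 = (1 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
    r = p1 / (p1 + p2)
    # M-step: closed-form maximizers of the expected complete-data log-likelihood.
    pi_new = r.mean()
    mu1_new = (r * x).sum() / r.sum()
    mu2_new = ((1 - r) * x).sum() / (1 - r).sum()
    return np.array([pi_new, mu1_new, mu2_new])

def neg_log_likelihood(theta, x):
    """Lyapunov-function candidate: EM never increases it along its trajectory."""
    pi, mu1, mu2 = theta
    lik = pi * np.exp(-0.5 * (x - mu1) ** 2) + (1 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
    return -np.sum(np.log(lik / np.sqrt(2 * np.pi)))

# Synthetic data from a two-mode distribution.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

theta = np.array([0.5, -1.0, 1.0])  # initial state theta_0
for k in range(100):
    theta_next = em_step(theta, x)
    print(k, neg_log_likelihood(theta, x))  # monotonically non-increasing along the iteration
    if np.allclose(theta_next, theta, atol=1e-10):  # (approximate) fixed point of the map
        break
    theta = theta_next

print("equilibrium (EM limit point):", theta)
```

Running the loop, the printed negative log-likelihood decreases monotonically and the iterates settle at a fixed point of the map, which is the behavior the paper recasts as asymptotic stability of an equilibrium in the sense of Lyapunov.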

Authors (3)
  1. Orlando Romero (9 papers)
  2. Sarthak Chatterjee (10 papers)
  3. Sérgio Pequito (23 papers)
Citations (15)
