
The Collusion of Memory and Nonlinearity in Stochastic Approximation With Constant Stepsize

(arXiv:2405.16732)
Published May 27, 2024 in stat.ML, cs.LG, math.OC, math.ST, and stat.TH

Abstract

In this work, we investigate stochastic approximation (SA) with Markovian data and nonlinear updates under constant stepsize $\alpha>0$. Existing work has primarily focused on either i.i.d. data or linear update rules. We take a new perspective and carefully examine the simultaneous presence of Markovian dependency of data and nonlinear update rules, delineating how the interplay between these two structures leads to complications that are not captured by prior techniques. By leveraging the smoothness and recurrence properties of the SA updates, we develop a fine-grained analysis of the correlation between the SA iterates $\theta_k$ and Markovian data $x_k$. This enables us to overcome the obstacles in existing analysis and establish for the first time the weak convergence of the joint process $(x_k, \theta_k)_{k\geq 0}$. Furthermore, we present a precise characterization of the asymptotic bias of the SA iterates, given by $\mathbb{E}[\theta_\infty]-\theta^\ast=\alpha(b_\text{m}+b_\text{n}+b_\text{c})+O(\alpha^{3/2})$. Here, $b_\text{m}$ is associated with the Markovian noise, $b_\text{n}$ is tied to the nonlinearity, and notably, $b_\text{c}$ represents a multiplicative interaction between the Markovian noise and nonlinearity, which is absent in previous works. As a by-product of our analysis, we derive finite-time bounds on the higher moments $\mathbb{E}[\|\theta_k-\theta^\ast\|^{2p}]$ and present non-asymptotic geometric convergence rates for the iterates, along with a Central Limit Theorem.
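To make the setting concrete, the sketch below simulates a scalar constant-stepsize SA recursion of the form $\theta_{k+1} = \theta_k - \alpha f(\theta_k, x_{k+1})$ driven by a two-state Markov chain, and estimates $\mathbb{E}[\theta_\infty]$ from the tail of the trajectory. The mean field and noise model here are illustrative choices, not the paper's; the point is only that the measured bias shrinks roughly linearly in $\alpha$, consistent with the $\alpha(b_\text{m}+b_\text{n}+b_\text{c})+O(\alpha^{3/2})$ expansion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sticky two-state Markov chain on {-1, +1}: consecutive samples are
# positively correlated, so the data x_k are Markovian rather than i.i.d.
P_STAY = 0.9

def step_chain(x):
    return x if rng.random() < P_STAY else -x

# Illustrative nonlinear update direction (not from the paper).
# The mean field E_pi[f(theta, x)] = theta + 0.5*(1 - cos(theta))
# has its unique root at theta* = 0; its curvature at the root is
# what generates a nonlinearity-induced bias term.
def f(theta, x):
    return theta + 0.5 * (1.0 - np.cos(theta)) + 0.4 * x

def stationary_mean(alpha, n_iter=1_000_000, burn_in=200_000):
    """Run theta_{k+1} = theta_k - alpha * f(theta_k, x_{k+1}) and
    average the post-burn-in iterates to estimate E[theta_infinity]."""
    theta, x = 0.0, 1
    total, count = 0.0, 0
    for k in range(n_iter):
        x = step_chain(x)
        theta -= alpha * f(theta, x)
        if k >= burn_in:
            total += theta
            count += 1
    return total / count

# Since theta* = 0 here, the tail average directly estimates the bias
# E[theta_infinity] - theta*; halving alpha should roughly halve it.
for alpha in (0.2, 0.1, 0.05):
    print(f"alpha = {alpha:4.2f}   estimated bias = {stationary_mean(alpha):+.4f}")
```

A cleaner version of this experiment would average over independent runs and extrapolate the bias estimates in $\alpha$ (e.g., Richardson extrapolation), a standard device for constant-stepsize SA; the single-trajectory tail average above is kept deliberately minimal.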
