Functional Central Limit Theorem and Strong Law of Large Numbers for Stochastic Gradient Langevin Dynamics
(arXiv:2210.02092) Published Oct 5, 2022 in math.PR, cs.LG, and math.OC
Abstract
We study the mixing properties of an important optimization algorithm in machine learning: stochastic gradient Langevin dynamics (SGLD) with a fixed step size. The data stream is not assumed to be independent; hence SGLD is not a Markov chain, merely a \emph{Markov chain in a random environment}, which complicates the mathematical treatment considerably. We derive a strong law of large numbers and a functional central limit theorem for SGLD.
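To make the setting concrete, here is a minimal sketch of the fixed-step-size SGLD recursion on a toy one-dimensional quadratic objective. The data stream is an AR(1) process, so consecutive samples are serially dependent, loosely mimicking the "Markov chain in a random environment" situation the abstract describes. All names, parameters, and the specific objective are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sgld(theta0, data, step, rng):
    """Fixed-step SGLD: theta_{k+1} = theta_k - step * g(theta_k, x_k) + sqrt(2*step) * xi_k."""
    theta = float(theta0)
    path = np.empty(len(data) + 1)
    path[0] = theta
    for k, x in enumerate(data):
        grad = theta - x  # stochastic gradient of the toy loss 0.5 * (theta - x)^2
        theta = theta - step * grad + np.sqrt(2.0 * step) * rng.standard_normal()
        path[k + 1] = theta
    return path

rng = np.random.default_rng(0)

# Dependent data stream: AR(1), x_{k+1} = 0.5 * x_k + eps_k, mean zero.
n = 20000
eps = rng.standard_normal(n)
data = np.empty(n)
data[0] = eps[0]
for k in range(1, n):
    data[k] = 0.5 * data[k - 1] + eps[k]

path = sgld(theta0=5.0, data=data, step=0.01, rng=rng)

# In the spirit of the strong law of large numbers proved in the paper,
# the time average of the iterates settles near the minimizer (0 here).
time_avg = np.mean(path[n // 2:])
```

The time average over the second half of the trajectory illustrates the ergodic-average quantity that a strong law of large numbers controls; the paper's results concern exactly this kind of long-run behavior under dependent data.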