Online Learning, Stability, and Stochastic Gradient Descent (1105.4701v3)
Published 24 May 2011 in cs.LG
Abstract: In batch learning, stability together with existence and uniqueness of the solution corresponds to well-posedness of Empirical Risk Minimization (ERM) methods; recently, it was proved that CV_loo stability is necessary and sufficient for generalization and consistency of ERM. In this note, we introduce CV_on stability, which plays a similar role in online learning. We show that stochastic gradient descent (SGD) under the usual hypotheses is CV_on stable, and we then discuss the implications of CV_on stability for the convergence of SGD.
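The abstract refers to the standard online SGD update, where at each step a single example is drawn and the hypothesis is moved against the gradient of the loss at that example. As a minimal sketch of that update rule (not the paper's exact setting), the following Python example uses a squared loss, a 1/(1+t) learning-rate schedule, and a synthetic data stream, all of which are illustrative assumptions.

```python
import numpy as np

def sgd_online(stream, w0, lr=lambda t: 1.0 / (1.0 + t)):
    """Online SGD on the squared loss (illustrative assumption).

    At step t, draw one example (x_t, y_t) from the stream and update
    w_{t+1} = w_t - gamma_t * grad ell(w_t; x_t, y_t).
    """
    w = np.asarray(w0, dtype=float)
    for t, (x, y) in enumerate(stream):
        grad = (w @ x - y) * x  # gradient of 0.5 * (w.x - y)^2 at (x, y)
        w = w - lr(t) * grad
    return w

# Usage: recover a linear target from a noisy stream of examples.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
stream = ((x, x @ w_true + 0.01 * rng.standard_normal())
          for x in (rng.standard_normal(2) for _ in range(5000)))
w_hat = sgd_online(stream, w0=np.zeros(2))
print(w_hat)  # approaches w_true as the stream lengthens
```

The decaying step size here is one common choice consistent with "the usual hypotheses" on SGD (summable squared steps, non-summable steps); the paper's stability analysis does not depend on this particular loss or schedule.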