Learning From An Optimization Viewpoint (1204.4145v1)

Published 18 Apr 2012 in cs.LG and cs.GT

Abstract: In this dissertation we study statistical and online learning problems from an optimization viewpoint. The dissertation is divided into two parts: I. We first consider the question of learnability for statistical learning problems in the general learning setting. The question of learnability is well studied and fully characterized for binary classification and for real-valued supervised learning problems using the theory of uniform convergence. However, we show that for the general learning setting, uniform convergence theory fails to characterize learnability. To fill this void, we use stability of learning algorithms to fully characterize statistical learnability in the general setting. Next we consider the problem of online learning. Unlike the statistical learning framework, there is a dearth of generic tools that can be used to establish learnability and rates for online learning problems in general. We provide online analogs of classical tools from statistical learning theory, such as Rademacher complexity and covering numbers. We further use these tools to fully characterize learnability for online supervised learning problems. II. In the second part, for general classes of convex learning problems, we provide appropriate mirror descent (MD) updates for online and statistical learning of these problems. Further, we show that MD is near optimal for online convex learning and, in most cases, is also near optimal for statistical convex learning. We next consider the problem of convex optimization and show that oracle complexity can be lower bounded by the so-called fat-shattering dimension of the associated linear class. Thus we establish a strong connection between offline convex optimization problems and statistical learning problems. We also show that for a large class of high-dimensional optimization problems, MD is in fact near optimal even for convex optimization.
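
The mirror descent updates referenced in Part II follow the general proximal/Bregman form. As a concrete illustration, below is a minimal sketch of online mirror descent with the negative-entropy mirror map over the probability simplex (the exponentiated-gradient instance). The loss model, step size, and data are assumptions chosen for the example; this is a standard textbook instance, not the dissertation's specific update rules.

```python
# Illustrative sketch of online mirror descent (OMD) with the negative-entropy
# mirror map on the probability simplex (exponentiated gradient). Loss model,
# step size, and data below are assumptions for illustration only.
import numpy as np

def online_mirror_descent(loss_grads, dim, eta=0.1, rounds=100):
    """Run OMD on the simplex: w_{t+1} proportional to w_t * exp(-eta * g_t)."""
    w = np.full(dim, 1.0 / dim)      # start at the uniform distribution
    iterates = []
    for t in range(rounds):
        g = loss_grads(w, t)         # gradient of the round-t loss at w
        w = w * np.exp(-eta * g)     # multiplicative (mirror) update
        w /= w.sum()                 # renormalize back onto the simplex
        iterates.append(w.copy())
    return iterates

# Example usage with hypothetical linear losses f_t(w) = <z_t, w>.
rng = np.random.default_rng(0)
zs = rng.standard_normal((100, 5))
iterates = online_mirror_descent(lambda w, t: zs[t], dim=5)
print(iterates[-1])
```

The negative-entropy mirror map is one choice among many; the dissertation's point is that an appropriately chosen mirror map makes such updates near optimal for broad classes of online and statistical convex learning problems.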

Citations (24)


