
Optimistic Rates for Learning with a Smooth Loss

(arXiv:1009.3896)
Published Sep 20, 2010 in cs.LG

Abstract

We establish an excess risk bound of $O(H R_n^2 + R_n \sqrt{H L^*})$ for empirical risk minimization with an $H$-smooth loss function and a hypothesis class with Rademacher complexity $R_n$, where $L^*$ is the best risk achievable by the hypothesis class. For typical hypothesis classes where $R_n = \sqrt{R/n}$, this translates to a learning rate of $O(RH/n)$ in the separable ($L^* = 0$) case and $O(RH/n + \sqrt{L^* RH/n})$ more generally. We also provide similar guarantees for online and stochastic convex optimization with a smooth non-negative objective.
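
To see how the general bound specializes to these rates, here is a quick sketch of the substitution, using only the quantities defined in the abstract: plugging $R_n = \sqrt{R/n}$ into each term of the bound gives

$$
H R_n^2 = H \cdot \frac{R}{n} = \frac{RH}{n},
\qquad
R_n \sqrt{H L^*} = \sqrt{\frac{R}{n}} \cdot \sqrt{H L^*} = \sqrt{\frac{L^* R H}{n}},
$$

so the excess risk is $O\big(RH/n + \sqrt{L^* RH/n}\big)$, which collapses to $O(RH/n)$ in the separable case where $L^* = 0$.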

