
Simpler PAC-Bayesian Bounds for Hostile Data (1610.07193v2)

Published 23 Oct 2016 in stat.ML, math.ST, and stat.TH

Abstract: PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution $\rho$ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution $\pi$. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as \emph{hostile data}). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csisz\'ar's $f$-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
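To make the structure of such statements concrete, here is one classical (Catoni-style) form of a PAC-Bayesian bound, stated as an illustrative sketch rather than the paper's theorem; it assumes an i.i.d. sample of size $n$ and a loss bounded in $[0,1]$, with $R$ the population risk, $r_n$ the empirical risk, and $\lambda>0$ a free tuning parameter. With probability at least $1-\delta$, simultaneously for every aggregation distribution $\rho$,
\[
\mathbb{E}_{\theta\sim\rho}\,R(\theta) \;\le\; \mathbb{E}_{\theta\sim\rho}\,r_n(\theta) \;+\; \frac{\mathrm{KL}(\rho\,\|\,\pi) + \log\frac{1}{\delta}}{\lambda} \;+\; \frac{\lambda}{8n}.
\]
The boundedness and independence assumptions enter precisely through the last term. The paper's bounds instead measure the discrepancy between $\rho$ and $\pi$ with a Csisz\'ar $f$-divergence,
\[
D_f(\rho\,\|\,\pi) \;=\; \int f\!\left(\frac{\mathrm{d}\rho}{\mathrm{d}\pi}\right)\mathrm{d}\pi, \qquad f \text{ convex},\; f(1)=0,
\]
which recovers the Kullback-Leibler divergence for $f(x)=x\log x$ and allows the bounded, independent setting to be relaxed to hostile data.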

Citations (68)
