A Limitation of the PAC-Bayes Framework
Abstract
PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester ('98). This framework has the flexibility of deriving distribution- and algorithm-dependent bounds, which are often tighter than VC-related uniform convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework. We demonstrate an easy learning task that is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well-known that this task is learnable using just $O(\log(1/\delta)/\epsilon)$ examples. On the other hand, we show that this fact cannot be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
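As an illustration of the learnability claim above, the standard learner for 1-D thresholds is empirical risk minimization: output any threshold consistent with the labeled sample. The sketch below is an assumption-laden toy (the function names, the uniform distribution on [0, 1], and the specific consistent-threshold rule are illustrative choices, not taken from the paper):

```python
import random

def sample(n, true_threshold=0.3, seed=0):
    # Draw n points uniformly from [0, 1], labeled by a 1-D
    # threshold classifier: label = 1 iff x >= true_threshold.
    # (Uniform distribution and seed are illustrative assumptions.)
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    return [(x, int(x >= true_threshold)) for x in xs]

def erm_threshold(data):
    # ERM for 1-D thresholds: return a threshold consistent with
    # the sample, here the smallest positively-labeled point
    # (or 1.0 if no positive examples were observed).
    positives = [x for x, y in data if y == 1]
    return min(positives) if positives else 1.0

data = sample(200)
h = erm_threshold(data)
# The learned threshold is consistent with every training label.
assert all(int(x >= h) == y for x, y in data)
```

A classical argument shows this learner needs only $O(\log(1/\delta)/\epsilon)$ examples; the paper's point is that no PAC-Bayes analysis can certify such a bound for every realizable distribution.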
Publication: arXiv e-prints
Pub Date: June 2020
arXiv: arXiv:2006.13508
Bibcode: 2020arXiv200613508L
Keywords: Computer Science - Machine Learning; Statistics - Machine Learning
E-Print: Added references about similar "failures" of the Min-max in the context of PAC learning with bounded mutual information