Linear regression through PAC-Bayesian truncation
Abstract
We consider the problem of predicting as well as the best linear combination of d given functions in least squares regression, under L^\infty constraints on the linear combination. When the input distribution is known, there already exists an algorithm having an expected excess risk of order d/n, where n is the size of the training data. Without this strong assumption, standard results often contain a multiplicative log(n) factor, complicated constants involving the conditioning of the Gram matrix of the covariates, kurtosis coefficients, or some geometric quantity characterizing the relation between L^2 and L^\infty balls, and require additional assumptions such as exponential moments of the output. This work provides a PAC-Bayesian shrinkage procedure with a simple excess risk bound of order d/n holding in expectation and in deviations, under various assumptions. The common surprising feature of these results is their simplicity and the absence of any exponential moment condition on the output distribution, while still achieving exponential deviations. The risk bounds are obtained through a PAC-Bayesian analysis on truncated differences of losses. We also show that these results can be generalized to other strongly convex loss functions.
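To give a concrete sense of what "truncated differences of losses" means in the abstract, the sketch below implements a generic hard truncation of per-sample squared-loss differences between two predictors. This is only an illustration of the truncation idea under assumed names (`truncate`, `truncated_risk_difference`, and the threshold choice are all hypothetical); it is not the paper's actual estimator or its specific truncation function.

```python
import numpy as np

def truncate(x, threshold):
    """Hard truncation: clip each value to the interval [-threshold, threshold]."""
    return np.clip(x, -threshold, threshold)

def truncated_risk_difference(y, pred_a, pred_b, threshold):
    """Empirical mean of truncated differences of squared losses.

    Truncating the per-sample loss differences bounds the influence of
    heavy-tailed outputs on the empirical comparison of two predictors,
    which is the kind of robustness the abstract alludes to.
    (Generic illustration only, not the paper's PAC-Bayesian estimator.)
    """
    diff = (y - pred_a) ** 2 - (y - pred_b) ** 2
    return truncate(diff, threshold).mean()

# Example: one ordinary sample and one large output value.
y = np.array([0.0, 10.0])
pred_a = np.zeros(2)   # hypothetical predictor A
pred_b = np.ones(2)    # hypothetical predictor B
# Loss differences are [-1, 19]; truncation at 5 caps the outlier's influence.
estimate = truncated_risk_difference(y, pred_a, pred_b, threshold=5.0)
```

Without truncation, the single heavy-tailed sample dominates the empirical comparison; with it, each sample's contribution is bounded by the threshold, which is what makes exponential deviation bounds possible without exponential moment assumptions on the output.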
 Publication:
 arXiv e-prints
 Pub Date:
 October 2010
 arXiv:
 arXiv:1010.0072
 Bibcode:
 2010arXiv1010.0072A
 Keywords:
 Mathematics - Statistics Theory