Interpreting a Penalty as the Influence of a Bayesian Prior
Abstract
In machine learning, it is common to optimize the parameters of a probabilistic model, modulated by a somewhat ad hoc regularization term that penalizes some values of the parameters. Regularization terms arise naturally in Variational Inference (VI), a tractable way to approximate Bayesian posteriors: the loss to optimize contains a Kullback-Leibler divergence term between the approximate posterior and a Bayesian prior. We fully characterize the regularizers that can arise this way, and provide a systematic way to compute the corresponding prior. This viewpoint also provides a prediction for useful values of the regularization factor in neural networks. We apply this framework to regularizers such as L1 or group-Lasso.
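The correspondence described in the abstract can be illustrated in its simplest (non-variational, MAP) form: a penalty term can be read as the negative log-density of a prior on the parameters. The sketch below is not the paper's construction; it only checks numerically, for an illustrative L1 penalty, that the penalty and the negative log-density of a Laplace prior differ by a constant independent of the parameters:

```python
import numpy as np

# Illustrative sketch (not the paper's VI construction): a penalty term can be
# read as the negative log-density of a Bayesian prior on the parameters.
# Example: an L1 penalty lam * sum|w_i| corresponds to an i.i.d. Laplace prior
# p(w_i) = (lam / 2) * exp(-lam * |w_i|), up to an additive constant.

def l1_penalty(w, lam):
    return lam * np.sum(np.abs(w))

def neg_log_laplace_prior(w, lam):
    # Negative log-density of the i.i.d. Laplace(0, 1/lam) prior.
    return -np.sum(np.log(lam / 2.0) - lam * np.abs(w))

w = np.array([0.5, -1.2, 3.0])
lam = 0.7

# The two quantities differ only by the w-independent normalization constant
# -n * log(lam / 2), so minimizing one is equivalent to minimizing the other.
diff = neg_log_laplace_prior(w, lam) - l1_penalty(w, lam)
const = -len(w) * np.log(lam / 2.0)
assert np.isclose(diff, const)
```

In the variational setting studied in the paper, the same kind of correspondence is mediated by the KL divergence between the approximate posterior and the prior, rather than by the log-prior alone.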
Publication: arXiv e-prints
Pub Date: February 2020
arXiv: arXiv:2002.00178
Bibcode: 2020arXiv200200178W
Keywords: Computer Science - Machine Learning; Mathematics - Statistics Theory; Statistics - Machine Learning
E-Print: 24 pages, including 2 pages of references and 10 pages of appendix