Neural Empirical Bayes
Abstract
We unify $\textit{kernel density estimation}$ and $\textit{empirical Bayes}$ and address a set of problems in unsupervised learning with a geometric interpretation of those methods, rooted in the $\textit{concentration of measure}$ phenomenon. Kernel density is viewed symbolically as $X\rightharpoonup Y$ where the random variable $X$ is smoothed to $Y= X+N(0,\sigma^2 I_d)$, and empirical Bayes is the machinery to denoise in a least-squares sense, which we express as $X \leftharpoondown Y$. A learning objective is derived by combining these two, symbolically captured by $X \rightleftharpoons Y$. Crucially, instead of using the original nonparametric estimators, we parametrize $\textit{the energy function}$ with a neural network denoted by $\phi$; at optimality, $\nabla \phi \approx \nabla \log f$ where $f$ is the density of $Y$. The optimization problem is abstracted as interactions of high-dimensional spheres which emerge due to the concentration of isotropic Gaussians. We introduce two algorithmic frameworks based on this machinery: (i) a "walk-jump" sampling scheme that combines Langevin MCMC (walks) and empirical Bayes (jumps), and (ii) a probabilistic framework for $\textit{associative memory}$, called NEBULA, defined à la Hopfield by the $\textit{gradient flow}$ of the learned energy to a set of attractors. We finish the paper by reporting the emergence of very rich "creative memories" as attractors of NEBULA for highly overlapping spheres.
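The walk-jump scheme described above can be sketched in a few lines. The sketch below is illustrative only and is not the paper's implementation: instead of a learned neural energy $\phi$, it uses a toy case where the score $\nabla \log f$ is known in closed form ($X \sim N(0,1)$, so $Y = X + N(0,\sigma^2)$ is $N(0, 1+\sigma^2)$). The "walk" is discretized Langevin MCMC in $Y$-space; the "jump" is the empirical Bayes least-squares estimator $\hat{x}(y) = y + \sigma^2 \nabla \log f(y)$.

```python
import numpy as np

sigma = 0.5  # smoothing noise level used to define Y = X + N(0, sigma^2)

def grad_log_f(y):
    # Score of the smoothed density f. Exact for this Gaussian toy case;
    # in the paper's setting this role is played by -grad(phi) from a
    # learned neural energy.
    return -y / (1.0 + sigma**2)

def walk(y0, n_steps=1000, delta=0.1, rng=None):
    # Langevin MCMC "walks" in Y-space driven by the score of f.
    if rng is None:
        rng = np.random.default_rng(0)
    y = y0
    for _ in range(n_steps):
        y = y + 0.5 * delta**2 * grad_log_f(y) + delta * rng.standard_normal(y.shape)
    return y

def jump(y):
    # Empirical Bayes "jump": least-squares denoiser
    # x_hat(y) = y + sigma^2 * grad log f(y).
    return y + sigma**2 * grad_log_f(y)

rng = np.random.default_rng(0)
y_samples = walk(rng.standard_normal(10_000), rng=rng)  # samples of the smoothed Y
x_samples = jump(y_samples)                             # denoised estimates of X
```

In this toy case the jump reduces to the posterior mean $E[X \mid Y=y] = y/(1+\sigma^2)$, so the correctness of the sketch can be checked analytically; with a learned $\phi$, `grad_log_f` would be replaced by automatic differentiation of the network.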
Publication: arXiv e-prints
Pub Date: March 2019
arXiv: arXiv:1903.02334
Bibcode: 2019arXiv190302334S
Keywords: Statistics - Machine Learning; Computer Science - Machine Learning
E-Print: 23 pages, 10 figures