Stochastic Gradient Variational Bayes in the Stochastic Blockmodel
Abstract
Stochastic variational Bayes algorithms have become very popular in the machine learning literature, particularly in the context of nonparametric Bayesian inference. These algorithms replace the true but intractable posterior distribution with the best (in the sense of Kullback-Leibler divergence) member of a tractable family of distributions, using stochastic gradient algorithms to perform the optimization step. Stochastic variational Bayes inference implicitly trades off computational speed for accuracy, but the loss of accuracy is highly model- (and even dataset-) specific. In this paper we carry out an empirical evaluation of this trade-off in the context of stochastic blockmodels, which are a widely used class of probabilistic models for network and relational data. Our experiments indicate that, in the context of stochastic blockmodels, relatively large subsamples are required for these algorithms to find accurate approximations of the posterior, and that even then the quality of the approximations provided by stochastic gradient variational algorithms can be highly variable.
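For readers unfamiliar with the model class, the following is a minimal sketch, not the paper's implementation, of a Bernoulli stochastic blockmodel's generative process together with the kind of subsampled, rescaled (and hence unbiased) objective estimate that stochastic gradient variational algorithms rely on. The function names, NumPy usage, and parameter values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of a Bernoulli stochastic
# blockmodel and of the subsampled objective estimates used by stochastic
# gradient variational algorithms.
import numpy as np


def sample_sbm(n_nodes, block_probs, edge_probs, rng=None):
    """Draw block labels z and an undirected adjacency matrix A from an SBM.

    block_probs : (K,) mixing proportions over the K latent blocks
    edge_probs  : (K, K) symmetric matrix of between-block edge probabilities
    """
    rng = np.random.default_rng(rng)
    z = rng.choice(len(block_probs), size=n_nodes, p=block_probs)
    A = np.zeros((n_nodes, n_nodes), dtype=int)
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            A[i, j] = A[j, i] = rng.binomial(1, edge_probs[z[i], z[j]])
    return A, z


def noisy_loglik(A, z, edge_probs, batch_size, rng=None):
    """Unbiased minibatch estimate of the log-likelihood over all node pairs.

    A random subsample of pairs is scored and rescaled by the total number of
    pairs; gradients of this kind of noisy objective are what stochastic
    gradient variational Bayes uses, with smaller batches trading accuracy
    for speed.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    idx = rng.choice(len(pairs), size=batch_size, replace=False)
    ll = 0.0
    for k in idx:
        i, j = pairs[k]
        p = edge_probs[z[i], z[j]]
        ll += A[i, j] * np.log(p) + (1 - A[i, j]) * np.log(1 - p)
    return ll * len(pairs) / batch_size


if __name__ == "__main__":
    # Two assortative blocks: dense within-block, sparse between-block edges.
    eta = np.array([[0.8, 0.1],
                    [0.1, 0.8]])
    A, z = sample_sbm(n_nodes=60, block_probs=np.array([0.5, 0.5]),
                      edge_probs=eta, rng=0)
    print(noisy_loglik(A, z, eta, batch_size=200, rng=1))
```

Here the `batch_size` argument plays the role of the subsample size whose effect on the quality of the posterior approximation the abstract discusses.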
- Publication:
- arXiv e-prints
- Pub Date:
- October 2024
- DOI:
- 10.48550/arXiv.2410.02649
- arXiv:
- arXiv:2410.02649
- Bibcode:
- 2024arXiv241002649R
- Keywords:
- Statistics - Methodology;
- Statistics - Applications
- E-Print:
- 33 pages, 16 figures