Convergence Rates of Biased Stochastic Optimization for Learning Sparse Ising Models
Abstract
We study the convergence rate of stochastic optimization of exact (NP-hard) objectives, for which only biased estimates of the gradient are available. We motivate this problem in the context of learning the structure and parameters of Ising models. We first provide a convergence-rate analysis of deterministic errors for forward-backward splitting (FBS). We then extend our analysis to biased stochastic errors by characterizing a family of samplers and providing a high-probability bound that covers not only FBS but also proximal gradient (PG) methods. We derive some interesting conclusions: FBS requires only a logarithmically increasing number of random samples in order to converge (although at a very slow rate); the required number of random samples is the same in the deterministic and the biased stochastic setting for FBS and basic PG; accelerated PG is not guaranteed to converge in the biased stochastic setting.
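For context, the following is a minimal sketch (not the authors' implementation) of a forward-backward splitting iteration for an ℓ1-regularized smooth objective where only a biased stochastic gradient oracle is available, as in learning sparse Ising models from a finite number of Gibbs samples. The names `grad_estimate` and `n_samples_fn`, the toy least-squares oracle, and the logarithmic sample schedule are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (the backward/implicit step of FBS).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbs(grad_estimate, w0, step, reg, n_iters, n_samples_fn):
    """Forward-backward splitting with a (possibly biased) stochastic gradient.

    grad_estimate(w, n_samples): approximate gradient of the smooth term,
        e.g. computed from finitely many Monte Carlo samples, which makes
        the estimate biased for any finite sample size.
    n_samples_fn(k): number of random samples drawn at iteration k
        (a slowly, e.g. logarithmically, increasing schedule).
    """
    w = w0.copy()
    for k in range(1, n_iters + 1):
        g = grad_estimate(w, n_samples_fn(k))          # forward (gradient) step, biased estimate
        w = soft_threshold(w - step * g, step * reg)   # backward (proximal) step for the l1 penalty
    return w

# Toy usage: least-squares smooth term with an artificially noisy and biased gradient.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)

def grad_estimate(w, n):
    g = A.T @ (A @ w - b) / len(b)
    # Noise shrinking with the sample count, plus a small constant bias.
    return g + rng.normal(scale=1.0 / np.sqrt(n), size=w.shape) + 0.01

w_hat = fbs(grad_estimate, np.zeros(10), step=0.05, reg=0.1,
            n_iters=200, n_samples_fn=lambda k: 10 + int(np.log(k + 1)))
```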
- Publication: arXiv e-prints
- Pub Date: June 2012
- DOI: 10.48550/arXiv.1206.4627
- arXiv: arXiv:1206.4627
- Bibcode: 2012arXiv1206.4627H
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning
- E-Print: ICML2012