Thompson Sampling with Approximate Inference
Abstract
We study the effects of approximate inference on the performance of Thompson sampling in $k$-armed bandit problems. Thompson sampling is a successful algorithm for online decision-making, but it requires posterior inference, which often must be approximated in practice. We show that even a small constant inference error (in $\alpha$-divergence) can lead to poor performance (linear regret) due to under-exploration (for $\alpha<1$) or over-exploration (for $\alpha>0$) by the approximation. While for $\alpha > 0$ this is unavoidable, for $\alpha \leq 0$ the regret can be improved by adding a small amount of forced exploration, even when the inference error is a large constant.
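To make the setting concrete, the following is a minimal illustrative sketch (not code from the paper) of Beta-Bernoulli Thompson sampling with an optional forced-exploration rate. The function name, the `epsilon` parameter, and the Beta(1, 1) prior are our own assumptions for illustration; the paper's analysis concerns approximate posteriors measured in $\alpha$-divergence, which this exact-posterior sketch does not model.

```python
import random

def thompson_sampling(true_means, horizon, epsilon=0.0, seed=0):
    """Beta-Bernoulli Thompson sampling on a k-armed bandit.

    With probability `epsilon`, an arm is pulled uniformly at random
    (forced exploration); otherwise the arm with the largest posterior
    sample is pulled. Returns the total reward over `horizon` pulls.
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [1] * k  # Beta(1, 1) prior pseudo-counts
    failures = [1] * k
    total_reward = 0
    for _ in range(horizon):
        if rng.random() < epsilon:
            # Forced exploration: uniform random arm.
            arm = rng.randrange(k)
        else:
            # Thompson step: sample each arm's posterior mean, pull the argmax.
            samples = [rng.betavariate(successes[i], failures[i])
                       for i in range(k)]
            arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward
```

Setting `epsilon=0` recovers plain Thompson sampling; a small positive `epsilon` corresponds to the forced-exploration modification the abstract proposes for the $\alpha \leq 0$ regime.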
Publication: arXiv e-prints
Pub Date: August 2019
arXiv: arXiv:1908.04970
Bibcode: 2019arXiv190804970P
Keywords: Computer Science - Machine Learning; Statistics - Machine Learning