On the Complexity of A/B Testing
Abstract
A/B testing refers to the task of determining the best option among two alternatives that yield random outcomes. We provide distribution-dependent lower bounds for the performance of A/B testing that improve over the results currently available, both in the fixed-confidence (or δ-PAC) and fixed-budget settings. When the distributions of the outcomes are Gaussian, we prove that the complexities of the fixed-confidence and fixed-budget settings are equivalent, and that uniform sampling of both alternatives is optimal only in the case of equal variances. In the common-variance case, we also provide a stopping rule that terminates faster than existing fixed-confidence algorithms. In the case of Bernoulli distributions, we show that the complexity of the fixed-budget setting is smaller than that of the fixed-confidence setting, and that uniform sampling of both alternatives, though not optimal, is advisable in practice when combined with an appropriate stopping criterion.
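To make the fixed-confidence setting concrete, here is a minimal illustrative sketch of a sequential A/B test for Gaussian outcomes with a known common variance: both alternatives are sampled uniformly, and the test stops once the empirical mean gap exceeds a conservative, union-bound-style confidence radius. The function names, the specific threshold, and the parameters are assumptions made for illustration; this is not the optimized stopping rule proposed in the paper.

```python
import math
import random

def ab_test_fixed_confidence(sample_a, sample_b, delta=0.05, sigma=1.0,
                             max_samples=100_000):
    """Illustrative delta-PAC A/B test: uniform sampling of two Gaussian
    arms with known common variance sigma^2, stopping when the empirical
    gap exceeds a conservative time-uniform confidence radius.
    Returns (recommended_arm, total_samples_used)."""
    sum_a = sum_b = 0.0
    for t in range(1, max_samples + 1):
        sum_a += sample_a()  # draw one outcome from each alternative
        sum_b += sample_b()
        gap = abs(sum_a - sum_b) / t
        # The empirical gap has variance 2*sigma^2/t; the log(2t^2/delta)
        # factor is a crude union bound over all stopping times t.
        radius = math.sqrt((4.0 * sigma**2 / t) * math.log(2.0 * t * t / delta))
        if gap > radius:
            return ('A' if sum_a > sum_b else 'B'), 2 * t
    return None, 2 * max_samples  # inconclusive within the sampling budget

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical arms: A has mean 1.0, B has mean 0.0, common variance 1.
    best, n = ab_test_fixed_confidence(lambda: random.gauss(1.0, 1.0),
                                       lambda: random.gauss(0.0, 1.0))
    print(best, n)
```

With a clearly separated pair of arms as above, the test typically stops after a few hundred samples; as the means get closer, the stopping time grows roughly like the inverse squared gap, which is the quantity the paper's lower bounds characterize precisely.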
 Publication:

arXiv e-prints
 Pub Date:
 May 2014
 DOI:
 10.48550/arXiv.1405.3224
 arXiv:
 arXiv:1405.3224
 Bibcode:
 2014arXiv1405.3224K
 Keywords:

 Mathematics - Statistics Theory;
 Computer Science - Machine Learning;
 Statistics - Machine Learning
 E-Print:
 Conference on Learning Theory, Jun 2014, Barcelona, Spain. JMLR: Workshop and Conference Proceedings, 35, pp. 461-481