An unbiased estimate for the mean of a {0,1} random variable with relative error distribution independent of the mean
Abstract
Say $X_1,X_2,\ldots$ are independent identically distributed Bernoulli random variables with mean $p$. This paper builds a new estimate $\hat p$ of $p$ that has the property that the relative error, $\hat p/p - 1$, of the estimate does not depend in any way on the value of $p$. This allows the construction of exact confidence intervals for $p$ of any desired level without needing any sort of limit or approximation. In addition, $\hat p$ is unbiased. For $\epsilon$ and $\delta$ in $(0,1)$, to obtain an estimate where $\mathbb{P}(|\hat p/p - 1| > \epsilon) \leq \delta$, the new algorithm takes on average at most $2\epsilon^{-2} p^{-1}\ln(2\delta^{-1})(1 - (14/3) \epsilon)^{-1}$ samples. It is also shown that any such algorithm that applies whenever $p \leq 1/2$ requires at least $0.2\epsilon^{-2} p^{-1}\ln((2-\delta)\delta^{-1})(1 + 2 \epsilon)$ samples. The same algorithm can also be applied to estimate the mean of any random variable that falls in $[0,1]$.
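The two sample-complexity expressions in the abstract can be evaluated numerically; a minimal sketch below does so, with hypothetical function names (the abstract does not describe the estimator itself, only these bounds, so nothing here implements the algorithm):

```python
from math import log

def upper_bound_samples(p, eps, delta):
    """Average-case upper bound from the abstract:
    2 * eps^-2 * p^-1 * ln(2/delta) * (1 - (14/3)*eps)^-1.
    Note this is positive only when eps < 3/14."""
    return 2 * eps**-2 * p**-1 * log(2 / delta) / (1 - (14 / 3) * eps)

def lower_bound_samples(p, eps, delta):
    """Lower bound for any such algorithm that applies whenever p <= 1/2:
    0.2 * eps^-2 * p^-1 * ln((2 - delta)/delta) * (1 + 2*eps)."""
    return 0.2 * eps**-2 * p**-1 * log((2 - delta) / delta) * (1 + 2 * eps)

# Example: p = 0.1, eps = 0.05, delta = 0.05
print(upper_bound_samples(0.1, 0.05, 0.05))
print(lower_bound_samples(0.1, 0.05, 0.05))
```

Both bounds scale as $\epsilon^{-2}p^{-1}\ln(\delta^{-1})$, so the upper bound is tight up to a constant factor for small $\epsilon$.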
 Publication:
 arXiv e-prints
 Pub Date:
 September 2013
 arXiv:
 arXiv:1309.5413
 Bibcode:
 2013arXiv1309.5413H
 Keywords:
 Mathematics - Statistics Theory;
 Computer Science - Computational Complexity;
 Mathematics - Probability;
 62F10;
 62F25
 E-Print:
 12 pages