AdaPT: An interactive procedure for multiple testing with side information
Abstract
We consider the problem of multiple hypothesis testing with generic side information: for each hypothesis $H_i$ we observe both a p-value $p_i$ and some predictor $x_i$ encoding contextual information about the hypothesis. For large-scale problems, adaptively focusing power on the more promising hypotheses (those more likely to yield discoveries) can lead to much more powerful multiple testing procedures. We propose a general iterative framework for this problem, called the Adaptive p-value Thresholding (AdaPT) procedure, which adaptively estimates a Bayes-optimal p-value rejection threshold and controls the false discovery rate (FDR) in finite samples. At each iteration of the procedure, the analyst proposes a rejection threshold and observes partially censored p-values, estimates the false discovery proportion (FDP) below the threshold, and either stops to reject or proposes another threshold, until the estimated FDP is below $\alpha$. Our procedure is adaptive in an unusually strong sense, permitting the analyst to use any statistical or machine learning method she chooses to estimate the optimal threshold, and to switch between different models at each iteration as information accrues. We demonstrate the favorable performance of AdaPT by comparing it to state-of-the-art methods in five real applications and two simulation studies.
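The iterative loop described above can be sketched in simplified form. The sketch below is illustrative only and not the authors' implementation: it uses a constant threshold $s$ rather than a covariate-dependent threshold $s(x_i)$ fitted by the analyst's model, and it omits the partial censoring of p-values. It does use the mirror-style FDP estimate $\widehat{\mathrm{FDP}} = (1 + A)\,/\,\max(R, 1)$, where $R$ counts p-values at or below the threshold and $A$ counts "mirror" p-values at or above one minus the threshold; the grid of candidate thresholds and the starting value 0.5 are arbitrary choices for this sketch.

```python
import numpy as np

def adapt_constant_threshold(pvals, alpha=0.1, grid_size=200):
    """Illustrative sketch of an AdaPT-style loop with a constant threshold.

    Lowers the candidate threshold s until the mirror-based FDP
    estimate (1 + A) / max(R, 1) falls below alpha, then rejects all
    hypotheses with p-values at or below s. In the full AdaPT procedure,
    s would instead be a function of the side information x_i, refit at
    each iteration from partially censored p-values.
    """
    pvals = np.asarray(pvals)
    for s in np.linspace(0.5, 0.0, grid_size):
        R = int(np.sum(pvals <= s))        # candidate rejections below threshold
        A = int(np.sum(pvals >= 1.0 - s))  # mirror p-values: proxy for false discoveries
        fdp_hat = (1 + A) / max(R, 1)      # conservative FDP estimate
        if fdp_hat <= alpha:
            return pvals <= s              # stop and reject at this threshold
    return np.zeros_like(pvals, dtype=bool)  # no threshold met the target: reject nothing
```

For example, with 20 very small p-values (true signals) mixed into a grid of larger null p-values, the loop shrinks the threshold until the mirror estimate certifies the target FDR level, rejecting the signals while admitting few nulls.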
Publication: arXiv e-prints
Pub Date: September 2016
arXiv: arXiv:1609.06035
Bibcode: 2016arXiv160906035L
Keywords: Statistics - Methodology
E-Print: Accepted by JRSSB