Bayesian nonparametric cross-study validation of prediction methods
Abstract
We consider comparisons of statistical learning algorithms using multiple data sets, via leave-one-in cross-study validation: each of the algorithms is trained on one data set; the resulting model is then validated on each remaining data set. This poses two statistical challenges that need to be addressed simultaneously. The first is the assessment of study heterogeneity, with the aim of identifying a subset of studies within which algorithm comparisons can be reliably carried out. The second is the comparison of algorithms using the ensemble of data sets. We address both problems by integrating clustering and model comparison. We formulate a Bayesian model for the array of cross-study validation statistics, which defines clusters of studies with similar properties and provides the basis for meaningful algorithm comparison in the presence of study heterogeneity. We illustrate our approach through simulations involving studies with varying severity of systematic errors, and in the context of medical prognosis for patients diagnosed with cancer, using high-throughput measurements of the transcriptional activity of the tumor's genes.
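As a sketch of the leave-one-in protocol the abstract describes, the Python example below computes the array of cross-study validation statistics for a given learning algorithm: entry (i, j) is the performance of the model trained on study i and validated on study j. The scikit-learn estimators, the AUC statistic, and the helper name `cross_study_validation` are illustrative assumptions, not the paper's implementation; in particular, the Bayesian nonparametric model the paper fits to this array is not reproduced here.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score


def cross_study_validation(algorithm, studies):
    """Leave-one-in cross-study validation (illustrative sketch).

    Train `algorithm` on each study in turn and validate the fitted model
    on every other study.  Returns an S x S array whose (i, j) entry is
    the validation statistic (here, AUC) of the model trained on study i
    and evaluated on study j; the diagonal is left as NaN.
    """
    S = len(studies)
    stats = np.full((S, S), np.nan)
    for i, (X_train, y_train) in enumerate(studies):
        model = clone(algorithm).fit(X_train, y_train)
        for j, (X_val, y_val) in enumerate(studies):
            if i == j:
                continue  # never validate on the training study
            stats[i, j] = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    return stats


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Four synthetic "studies" sharing a common signal; the last one gets a
    # systematic shift to mimic the heterogeneity discussed in the abstract.
    studies = []
    for s in range(4):
        X = rng.normal(size=(200, 10))
        shift = 2.0 if s == 3 else 0.0  # study 3 is systematically off
        y = (X[:, 0] + 0.5 * X[:, 1] + shift + rng.normal(size=200) > 0).astype(int)
        studies.append((X, y))

    for algo in (LogisticRegression(max_iter=1000),
                 RandomForestClassifier(random_state=0)):
        csv_matrix = cross_study_validation(algo, studies)
        print(type(algo).__name__)
        print(np.round(csv_matrix, 2))
```

Studies whose rows or columns are systematically low (like the shifted study above) stand out in this array; the paper's contribution is to model such patterns jointly, clustering studies and comparing the algorithms within homogeneous clusters.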
- Publication: arXiv e-prints
- Pub Date: June 2015
- DOI: 10.48550/arXiv.1506.00474
- arXiv: arXiv:1506.00474
- Bibcode: 2015arXiv150600474T
- Keywords: Statistics - Applications
- E-Print: Published at http://dx.doi.org/10.1214/14-AOAS798 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)