Why So Many Published Sensitivity Analyses Are False: A Systematic Review of Sensitivity Analysis Practices
Abstract
Sensitivity analysis (SA) has much to offer for a very large class of applications, such as model selection, calibration, optimization, quality assurance and many others. Sensitivity analysis offers crucial contextual information about a prediction by answering the question "Which uncertain input factors are responsible for the uncertainty in the prediction?" SA is distinct from uncertainty analysis (UA), which instead addresses the question "How uncertain is the prediction?" As we discuss in the present paper, much confusion exists in the use of these terms. A proper uncertainty analysis of the output of a mathematical model needs to map what the model does when the input factors are left free to vary over their range of existence. A fortiori, this is true of a sensitivity analysis. Despite this, most published UA and SA explore the input space only by moving along one-dimensional corridors, leaving the space of variation of the input factors mostly unexplored. We use results from a bibliometric analysis to show that many published SA fail the elementary requirement of properly exploring the space of the input factors. The results, while discipline-dependent, point to a worrying lack of standards and of recognized good practices. The misuse of sensitivity analysis in mathematical modelling is at least as serious as the misuse of the p-test in statistical modelling. Mature methods have existed for about two decades to produce a defensible sensitivity analysis. We end by offering a rough guide for proper use of the methods.
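The claim that one-dimensional (one-at-a-time, OAT) exploration leaves the input space mostly unexplored can be made concrete with a standard geometric argument: OAT points lie inside the ball inscribed in the input hypercube, and the ratio of the ball's volume to the cube's shrinks rapidly with dimension. The sketch below (an illustration of this well-known argument, not code from the paper) computes that ratio for a k-dimensional unit-radius ball inside its enclosing cube of side 2.

```python
import math

def ball_to_cube_volume_ratio(k: int) -> float:
    """Fraction of a k-dimensional cube of side 2 occupied by the
    inscribed unit-radius ball: pi^(k/2) / Gamma(k/2 + 1) / 2^k."""
    ball_volume = math.pi ** (k / 2) / math.gamma(k / 2 + 1)
    cube_volume = 2.0 ** k
    return ball_volume / cube_volume

# In 2 dimensions the inscribed disc covers ~79% of the square,
# but by 10 dimensions the inscribed ball covers under 0.3% of the cube,
# so OAT designs confined near the centre leave the space almost untouched.
for k in (2, 3, 10):
    print(k, ball_to_cube_volume_ratio(k))
```

Because OAT samples never stray beyond this inscribed region, the fraction of the input space they can possibly probe vanishes as the number of uncertain factors grows, which is the geometric core of the paper's critique.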
Publication: arXiv e-prints
Pub Date: November 2017
arXiv: arXiv:1711.11359
Bibcode: 2017arXiv171111359S
Keywords: Statistics - Applications; 62K99
E-Print: 23 pages using double space