High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach
Abstract
This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks. It is axiomatic that high-quality PIs should be as narrow as possible, whilst capturing a specified portion of the data. We derive a loss function directly from this axiom that requires no distributional assumption. We show how its form derives from a likelihood principle, that it can be used with gradient descent, and that model uncertainty is accounted for by ensembling. Benchmark experiments show the method outperforms current state-of-the-art uncertainty quantification methods, reducing average PI width by over 10%.
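The abstract's idea (penalise interval width while requiring a target capture rate, with a sigmoid relaxation so the objective is differentiable for gradient descent) can be sketched roughly as below. This is a minimal NumPy illustration, not the paper's exact formulation: the penalty weight `lam`, the sigmoid `softness`, and the precise scaling of the coverage penalty are illustrative assumptions.

```python
import numpy as np

def qd_loss(y_true, y_lower, y_upper, alpha=0.05, lam=15.0, softness=160.0):
    """Sketch of a quality-driven PI loss: mean width of captured
    points plus a penalty when coverage falls below (1 - alpha).
    Hyperparameter values here are illustrative assumptions."""
    n = len(y_true)
    # Soft capture indicator: product of sigmoids, differentiable in the bounds
    k_soft = (1.0 / (1.0 + np.exp(-softness * (y_upper - y_true)))) \
           * (1.0 / (1.0 + np.exp(-softness * (y_true - y_lower))))
    # Hard capture indicator, used only to measure width of captured points
    k_hard = ((y_true >= y_lower) & (y_true <= y_upper)).astype(float)
    # Mean PI width over captured points (epsilon avoids division by zero)
    mpiw_capt = np.sum((y_upper - y_lower) * k_hard) / (np.sum(k_hard) + 1e-6)
    # Soft PI coverage probability
    picp = np.mean(k_soft)
    # One-sided penalty for coverage shortfall below the target (1 - alpha)
    penalty = lam * n / (alpha * (1.0 - alpha)) * max(0.0, (1.0 - alpha) - picp) ** 2
    return mpiw_capt + penalty
```

In use, the two interval bounds would be a neural network's two outputs, trained by gradient descent on this loss; a narrower interval that still covers the data scores lower, while an interval that misses points incurs a large coverage penalty.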
- Publication: arXiv e-prints
- Pub Date: February 2018
- DOI: 10.48550/arXiv.1802.07167
- arXiv: arXiv:1802.07167
- Bibcode: 2018arXiv180207167P
- Keywords: Statistics - Machine Learning
- E-Print: Proceedings of the 35th International Conference on Machine Learning, 2018