Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression
Abstract
Heteroscedastic regression is the task of supervised learning where each label is subject to noise from a different distribution. This noise can be caused by the labelling process, and it negatively impacts the performance of the learning algorithm as it violates the i.i.d. assumption. In many situations, however, the labelling process is able to estimate the variance of such a distribution for each label, which can be used as additional information to mitigate this impact. We adapt an inverse-variance weighted mean square error, based on the Gauss-Markov theorem, for parameter optimization on neural networks. We introduce Batch Inverse-Variance (BIV), a loss function which is robust to near-ground-truth samples and allows control of the effective learning rate. Our experimental results show that BIV significantly improves the performance of the networks on two noisy datasets, compared to L2 loss, inverse-variance weighting, as well as a filtering-based baseline.
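The loss described above can be sketched as follows. This is a minimal illustration, not the authors' reference implementation: it assumes the weights are normalized within each mini-batch and that a small variance floor `eps` (a hypothetical hyperparameter here) prevents near-ground-truth samples, whose label variance is close to zero, from receiving unbounded weight.

```python
import numpy as np

def biv_loss(predictions, targets, label_variances, eps=0.5):
    """Inverse-variance weighted MSE, normalized within the batch.

    eps is an assumed variance floor: it caps the weight of
    near-ground-truth samples (label variance ~ 0) so they cannot
    dominate the batch and destabilize the effective learning rate.
    """
    weights = 1.0 / (label_variances + eps)
    weights = weights / weights.sum()  # normalize weights over the batch
    return np.sum(weights * (predictions - targets) ** 2)
```

With equal label variances the weights become uniform and the loss reduces to the ordinary mean squared error; as a label's variance grows, its contribution to the gradient shrinks accordingly.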
 Publication:
 arXiv e-prints
 Pub Date:
 July 2021
 DOI:
 10.48550/arXiv.2107.04497
 arXiv:
 arXiv:2107.04497
 Bibcode:
 2021arXiv210704497M
 Keywords:
 Computer Science - Machine Learning;
 Computer Science - Artificial Intelligence;
 Statistics - Machine Learning
 E-Print:
 Accepted at the Uncertainty in Deep Learning (UDL) workshop at ICML 2021