
Pairing an arbitrary regressor with an artificial neural network estimating aleatoric uncertainty (1707.07287v3)

Published 23 Jul 2017 in stat.ML and cs.LG

Abstract: We suggest a general approach to quantifying different forms of aleatoric uncertainty in regression tasks performed by artificial neural networks. It is based on the simultaneous training of two neural networks with a joint loss function and a specific hyperparameter $\lambda>0$ that allows for automatically detecting noisy and clean regions in the input space and controlling their relative contribution to the loss and its gradients. After the model has been trained, one of the networks performs predictions and the other quantifies the uncertainty of these predictions by estimating the locally averaged loss of the first one. Unlike many classical uncertainty quantification methods, we do not assume any a priori knowledge of the ground-truth probability distribution, nor do we, in general, maximize the likelihood of a chosen parametric family of distributions. We analyze the learning process and the influence of clean and noisy regions of the input space on the loss surface, depending on $\lambda$. In particular, we show that small values of $\lambda$ increase the relative contribution of clean regions to the loss and its gradients. This explains why choosing a small $\lambda$ yields better predictions than both neural networks without an uncertainty counterpart and those based on classical likelihood maximization. Finally, we demonstrate that one can naturally form ensembles of pairs of our networks, thereby capturing both aleatoric and epistemic uncertainty and avoiding overfitting.
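
The abstract only sketches the training scheme, so below is a minimal PyTorch illustration of the two-network idea. The architectures, the toy data, and in particular the exponential $\lambda$-weighting are assumptions made for illustration; the paper's actual joint loss is not reproduced here. Only the structure follows the abstract: a predictor paired with a second network that regresses onto the predictor's detached per-sample loss (and so estimates its locally averaged loss), with small $\lambda$ concentrating the training signal on clean regions.

```python
# Hypothetical sketch of pairing a regressor with an uncertainty network.
# The exponential lambda-weighting below is an illustrative stand-in, NOT
# the paper's joint loss; it merely matches the qualitative behaviour the
# abstract describes (small lambda emphasizes clean regions).
import torch
import torch.nn as nn

def make_mlp(in_dim=1, hidden=64, out_dim=1):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

predictor = make_mlp()    # f(x): the network that performs predictions
uncertainty = make_mlp()  # u(x): estimates the locally averaged loss of f

opt = torch.optim.Adam(
    list(predictor.parameters()) + list(uncertainty.parameters()), lr=1e-3
)
lam = 0.1  # the hyperparameter lambda > 0 from the abstract

# Toy heteroscedastic data: x > 0 is a noisy region, x <= 0 a clean one.
x = torch.rand(512, 1) * 4 - 2
noise_scale = 0.05 + 0.45 * (x > 0).float()
y = torch.sin(3 * x) + noise_scale * torch.randn_like(x)

for step in range(2000):
    per_sample_loss = (predictor(x) - y).pow(2)  # loss of the first network

    # The second network regresses onto the detached loss of the first one;
    # an MSE-trained u(x) converges to the locally averaged loss of f.
    u = uncertainty(x)
    unc_loss = (u - per_sample_loss.detach()).pow(2).mean()

    # HYPOTHETICAL lambda-coupling: exponential down-weighting by estimated
    # uncertainty, so small lambda concentrates gradients on clean regions.
    w = torch.exp(-u.detach().clamp(min=0.0) / lam)
    pred_loss = (w * per_sample_loss).sum() / w.sum()

    opt.zero_grad()
    (pred_loss + unc_loss).backward()
    opt.step()
```

After training, `predictor(x)` is used for predictions and `uncertainty(x)` serves as the per-input aleatoric uncertainty estimate; following the abstract, several such pairs can be ensembled to also capture epistemic uncertainty.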

Citations (10)
