Interval Deep Learning for Uncertainty Quantification in Safety Applications

(arXiv:2105.06438)
Published May 13, 2021 in cs.LG, cs.NA, and math.NA

Abstract

Deep neural networks (DNNs) are becoming more prevalent in important safety-critical applications, where reliability in the prediction is paramount. Despite their exceptional prediction capabilities, current DNNs do not have an implicit mechanism to quantify and propagate significant input data uncertainty, which is common in safety-critical applications. In many cases this uncertainty is epistemic and can arise from multiple sources, such as lack of knowledge about the data-generating process, imprecision, ignorance, and poor understanding of physics phenomena. Recent approaches have focused on quantifying parameter uncertainty, but approaches to end-to-end training of DNNs under epistemic input data uncertainty remain limited and largely problem-specific. In this work, we present a DNN optimized with gradient-based methods that is capable of quantifying input and parameter uncertainty by means of interval analysis, which we call the Deep Interval Neural Network (DINN). We perform experiments on an air pollution dataset with sensor uncertainty and show that the DINN can produce accurate bounded estimates from uncertain input data.
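The abstract does not include code, but the core mechanism it describes, propagating an interval-valued input through a network so the output is a guaranteed enclosure, can be sketched with standard interval arithmetic. Below is a minimal, hypothetical NumPy illustration of an interval forward pass through fully connected ReLU layers; the function and variable names (`interval_linear`, `x_lo`, `x_hi`) are illustrative and not taken from the paper's DINN implementation, and the sketch assumes point-valued weights rather than the paper's full treatment of parameter uncertainty.

```python
import numpy as np

def interval_linear(x_lo, x_hi, W, b):
    """Propagate the input box [x_lo, x_hi] through y = W @ x + b.

    Splitting W into positive and negative parts yields guaranteed
    elementwise bounds on the affine image of the box (standard
    interval arithmetic).
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return y_lo, y_hi

def interval_relu(x_lo, x_hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(x_lo, 0.0), np.maximum(x_hi, 0.0)

# Toy example: a two-layer network with an uncertain (interval) input,
# e.g. sensor readings known only up to +/- 0.1.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

x_lo = np.array([0.4, 0.9])   # lower sensor bounds
x_hi = np.array([0.6, 1.1])   # upper sensor bounds

h_lo, h_hi = interval_relu(*interval_linear(x_lo, x_hi, W1, b1))
y_lo, y_hi = interval_linear(h_lo, h_hi, W2, b2)
print(f"output enclosure: [{y_lo[0]:.3f}, {y_hi[0]:.3f}]")
```

In the paper, bounds of this kind are propagated end to end and the network is trained with gradient-based optimization through the interval forward pass; the sketch above covers only the forward enclosure.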
