Complementing Semi-Supervised Learning with Uncertainty Quantification

(2207.12131)
Published Jul 22, 2022 in cs.LG and cs.AI

Abstract

Fully supervised classification requires a tremendous amount of annotated data, yet in many datasets a large portion of the data is unlabeled. Semi-supervised learning (SSL) alleviates this problem by leveraging the classifier's knowledge of the labeled domain and extrapolating it to the unlabeled domain, which is assumed to follow a distribution similar to that of the annotated data. The recent success of SSL methods hinges crucially on thresholded pseudo-labeling combined with consistency regularization on the unlabeled domain. However, existing methods do not incorporate into training the uncertainty of the pseudo-labels or of the unlabeled samples, which arises from noisy labels or from out-of-distribution samples produced by strong augmentations. Inspired by recent developments in SSL, we propose a novel unsupervised uncertainty-aware objective that relies on aleatoric and epistemic uncertainty quantification. Complementing recent SSL techniques with the proposed uncertainty-aware loss function, our approach outperforms or is on par with the state of the art on standard SSL benchmarks while remaining computationally lightweight, and it surpasses state-of-the-art results on complex datasets such as CIFAR-100 and Mini-ImageNet.
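
The abstract does not spell out the objective, but a minimal sketch can illustrate how an uncertainty-aware weighting might complement a FixMatch-style thresholded pseudo-labeling loss. Everything below is an assumption for illustration, not the authors' implementation: the function name, the threshold `tau`, the number of Monte Carlo passes `n_mc`, the use of predictive entropy as an aleatoric proxy, and MC-dropout variance as an epistemic proxy.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_unsup_loss(model, weak_x, strong_x, tau=0.95, n_mc=4):
    """Illustrative FixMatch-style unlabeled loss, down-weighted by
    per-sample uncertainty (not the paper's exact objective).

    Assumes `model` is in train mode so dropout stays active for the
    MC-dropout passes. Aleatoric proxy: predictive entropy of the weak
    view. Epistemic proxy: variance across MC-dropout forward passes.
    """
    with torch.no_grad():
        # Pseudo-labels from the weakly augmented view (no gradient).
        probs = F.softmax(model(weak_x), dim=1)           # [B, C]
        conf, pseudo = probs.max(dim=1)                   # confidence, hard label
        mask = (conf >= tau).float()                      # standard threshold

        # Aleatoric proxy: predictive entropy, normalized to [0, 1].
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
        aleatoric = entropy / torch.log(torch.tensor(float(probs.size(1))))

        # Epistemic proxy: variance of MC-dropout predictions.
        mc = torch.stack(
            [F.softmax(model(weak_x), dim=1) for _ in range(n_mc)]
        )                                                 # [n_mc, B, C]
        epistemic = mc.var(dim=0).mean(dim=1)             # [B]

        # Confident, low-uncertainty samples keep weight close to 1.
        weight = mask * (1.0 - aleatoric) * torch.exp(-epistemic)

    # Consistency: the strongly augmented view must match the pseudo-label.
    loss = F.cross_entropy(model(strong_x), pseudo, reduction="none")
    return (weight * loss).mean()
```

In this sketch the standard confidence threshold is kept intact; the uncertainty terms only attenuate the contribution of confidently wrong or out-of-distribution samples rather than replacing the mask, which is one plausible way to combine the two signals.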
