Emergent Mind

Negative Sampling in Variational Autoencoders

(1910.02760)
Published Oct 7, 2019 in cs.LG and stat.ML

Abstract

Modern deep artificial neural networks have achieved great success in computer vision and beyond. However, their application to many real-world tasks is undermined by limitations such as overconfident uncertainty estimates on out-of-distribution data and performance deterioration under data distribution shifts. Several types of deep generative models used for density estimation have been shown to fail at detecting out-of-distribution samples, assigning higher likelihoods to anomalous data than to in-distribution data. We investigate this failure mode in Variational Autoencoder (VAE) models and improve the model's out-of-distribution generalization by employing an alternative training scheme that utilizes negative samples. We also present a fully unsupervised variant: when the model is trained in an adversarial manner, the generator's own outputs can serve as negative samples. We demonstrate empirically that the approach reduces overconfident likelihood estimates on out-of-distribution image inputs.
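The abstract describes training a VAE to maximize the usual evidence lower bound (ELBO) on in-distribution data while penalizing high likelihood estimates on negative samples. A minimal NumPy sketch of such a combined objective is below; the function names, the hinge-with-margin form of the penalty, and the `margin`/`lam` hyperparameters are illustrative assumptions for a single data point, not the paper's exact objective.

```python
import numpy as np

def elbo_proxy(x, mu, logvar, recon):
    # Standard VAE objective for one sample: Gaussian (unit-variance)
    # reconstruction log-likelihood minus the KL divergence from the
    # approximate posterior N(mu, diag(exp(logvar))) to the prior N(0, I).
    rec_ll = -0.5 * np.sum((x - recon) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return rec_ll - kl

def negative_sampling_loss(elbo_pos, elbo_neg, margin=100.0, lam=1.0):
    # Illustrative combined loss (to be minimized): maximize the ELBO on
    # an in-distribution sample while pushing the ELBO of a negative
    # sample below -margin via a hinge-style penalty. Negatives whose
    # ELBO is already low incur no penalty.
    return -elbo_pos + lam * np.maximum(0.0, elbo_neg + margin)
```

Under this kind of objective, negative samples (e.g. the adversarial generator's own outputs, in the unsupervised variant) only contribute gradient while the model still assigns them high likelihood, which is the failure mode being corrected.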
