Unsupervised Out-of-Distribution Detection by Restoring Lossy Inputs with Variational Autoencoder (2309.02084v3)
Abstract: Deep generative models have been shown to be problematic for the unsupervised out-of-distribution (OOD) detection task, where they tend to assign higher likelihoods to OOD samples than to in-distribution ones. Previous studies on this issue are usually not applicable to the Variational Autoencoder (VAE). As a popular subclass of generative models, the VAE can be effective with a relatively small model size and is more stable and faster in training and inference, which makes it more advantageous in real-world applications. In this paper, we propose a novel VAE-based score called Error Reduction (ER) for OOD detection, based on a VAE that takes a lossy version of the training set as inputs and the original set as targets. Experiments on various datasets demonstrate the effectiveness of our method, and ablation experiments show the effect of our design choices. Our code is available at: https://github.com/ZJLAB-AMMI/VAE4OOD.
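The abstract states that the VAE is trained with a lossy version of each training sample as input and the original sample as the restoration target. The sketch below illustrates one way to build such training pairs; the abstract does not specify the lossy transform, so uniform intensity quantization (`make_lossy`, `build_training_pairs` are hypothetical helper names) is purely an illustrative assumption, not necessarily the transform used in the paper.

```python
import numpy as np

def make_lossy(x: np.ndarray, levels: int = 8) -> np.ndarray:
    """Quantize intensities in [0, 1) down to `levels` uniform values.

    NOTE: this quantization is an assumed, illustrative lossy transform;
    the paper's actual choice may differ (see the repository linked above).
    """
    return np.floor(x * levels) / levels

def build_training_pairs(images: np.ndarray, levels: int = 8):
    """Pair each lossy input with its original image as the target."""
    return make_lossy(images, levels), images

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    batch = rng.random((4, 28, 28))                 # toy grayscale batch in [0, 1)
    inputs, targets = build_training_pairs(batch)   # feed (inputs -> targets) to the VAE
```

A VAE trained on such pairs learns to restore the original from its degraded version, and an OOD score can then be derived from how well restoration succeeds on test inputs.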