
FewSOME: One-Class Few Shot Anomaly Detection with Siamese Networks (2301.06957v4)

Published 17 Jan 2023 in cs.LG

Abstract: Recent Anomaly Detection techniques have progressed the field considerably but at the cost of increasingly complex training pipelines. Such techniques require large amounts of training data, resulting in computationally expensive algorithms that are unsuitable for settings where only a small amount of normal samples are available for training. We propose 'Few Shot anOMaly detection' (FewSOME), a deep One-Class Anomaly Detection algorithm with the ability to accurately detect anomalies having trained on 'few' examples of the normal class and no examples of the anomalous class. We describe FewSOME to be of low complexity given its low data requirement and short training time. FewSOME is aided by pretrained weights with an architecture based on Siamese Networks. By means of an ablation study, we demonstrate how our proposed loss, 'Stop Loss', improves the robustness of FewSOME. Our experiments demonstrate that FewSOME performs at state-of-the-art level on benchmark datasets MNIST, CIFAR-10, F-MNIST and MVTec AD while training on only 30 normal samples, a minute fraction of the data that existing methods are trained on. Moreover, our experiments show FewSOME to be robust to contaminated datasets. We also report F1 score and balanced accuracy in addition to AUC as a benchmark for future techniques to be compared against. Code available: https://github.com/niamhbelton/FewSOME.
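
The abstract describes the core idea only at a high level: embed images with a Siamese-style, pretrained feature extractor and detect anomalies using just a few normal training samples. The sketch below is a hedged illustration of that setup, not the paper's implementation: it assumes a ResNet-18 backbone from torchvision and a simple mean-distance-to-references anomaly score, and it omits FewSOME's training procedure and the proposed 'Stop Loss'.

```python
# Minimal sketch (not the FewSOME implementation) of one-class few-shot anomaly
# scoring with a Siamese-style embedding: embed a small reference set of normal
# images with a pretrained backbone shared across inputs, then score a query
# image by its distance to those reference embeddings. The backbone choice and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Pretrained backbone acting as the shared (Siamese) feature extractor.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep 512-d features
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of images (N, 3, H, W) to L2-normalised embeddings."""
    feats = backbone(images)
    return F.normalize(feats, dim=1)

@torch.no_grad()
def anomaly_score(query: torch.Tensor, normal_refs: torch.Tensor) -> torch.Tensor:
    """Score each query image by its mean distance to the few normal references.

    query:       (Q, 3, H, W) test images
    normal_refs: (K, 3, H, W) the 'few' normal training samples (e.g. K = 30)
    Returns a (Q,) tensor; larger values indicate more anomalous inputs.
    """
    q = embed(query)           # (Q, D)
    r = embed(normal_refs)     # (K, D)
    dists = torch.cdist(q, r)  # pairwise Euclidean distances, shape (Q, K)
    return dists.mean(dim=1)

if __name__ == "__main__":
    refs = torch.randn(30, 3, 224, 224)    # stand-in for 30 normal samples
    queries = torch.randn(4, 3, 224, 224)  # stand-in for test images
    print(anomaly_score(queries, refs))
```

FewSOME itself additionally trains on the roughly 30 normal samples using the proposed 'Stop Loss'; the frozen-backbone scoring above is only meant to convey the one-class, few-shot flavour of the approach.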

