OpenSlot: Mixed Open-Set Recognition with Object-Centric Learning (2407.02386v2)

Published 2 Jul 2024 in cs.CV

Abstract: Existing open-set recognition (OSR) studies typically assume that each image contains only one class label, with the unknown test set (negative) having a disjoint label space from the known test set (positive), a scenario referred to as full-label shift. This paper introduces the mixed OSR problem, where test images contain multiple class semantics, with both known and unknown classes co-occurring in the negatives, leading to a more complex super-label shift that better reflects real-world scenarios. To tackle this challenge, we propose the OpenSlot framework, based on object-centric learning, which uses slot features to represent diverse class semantics and generate class predictions. The proposed anti-noise slot (ANS) technique helps mitigate the impact of noise (invalid or background) slots during classification training, addressing the semantic misalignment between class predictions and ground truth. We evaluate OpenSlot on both mixed and conventional OSR benchmarks. Without elaborate designs, our method not only outperforms existing approaches in detecting super-label shifts across OSR tasks, but also achieves state-of-the-art performance on conventional benchmarks. Meanwhile, OpenSlot can localize class objects without using bounding boxes during training, demonstrating competitive performance in open-set object detection and potential for generalization.
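The abstract describes per-slot classification on object-centric slot features, with noise (invalid or background) slots suppressed during training. As a rough illustration only, the sketch below shows a PyTorch-style classification head over slot features plus a simple confidence-threshold filter standing in for the anti-noise slot (ANS) idea; every module name, threshold value, and scoring rule here is an assumption for illustration, not the authors' implementation.

# Minimal sketch (assumptions, not the authors' code): per-slot classification
# on top of object-centric slot features, with a simple noise-slot filter in
# the spirit of the paper's anti-noise slot (ANS) idea. All module names and
# hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class SlotClassifier(nn.Module):
    def __init__(self, slot_dim=64, num_known_classes=20):
        super().__init__()
        # A shared linear head maps each slot feature to known-class logits.
        self.head = nn.Linear(slot_dim, num_known_classes)

    def forward(self, slots):
        # slots: (batch, num_slots, slot_dim), e.g. from a slot-attention encoder.
        return self.head(slots)  # (batch, num_slots, num_known_classes)

def filter_noise_slots(logits, conf_threshold=0.5):
    # Treat slots whose maximum class probability is low as background/invalid
    # ("noise") slots and mask them out, so they do not dominate training or
    # open-set scoring. This thresholding rule is an illustrative stand-in for
    # the paper's ANS technique, not a reproduction of it.
    probs = logits.softmax(dim=-1)      # (batch, num_slots, num_known_classes)
    max_conf, _ = probs.max(dim=-1)     # (batch, num_slots)
    return max_conf >= conf_threshold   # boolean mask of "valid" slots

if __name__ == "__main__":
    batch, num_slots, slot_dim = 2, 6, 64
    slots = torch.randn(batch, num_slots, slot_dim)  # stand-in slot features
    clf = SlotClassifier(slot_dim=slot_dim, num_known_classes=20)
    logits = clf(slots)
    keep = filter_noise_slots(logits)
    conf = logits.softmax(dim=-1).amax(dim=-1)   # per-slot max confidence
    conf = conf.masked_fill(~keep, 0.0)          # zero out noise slots
    image_score = conf.amax(dim=-1)              # per-image "known-ness" score
    print(image_score)

In this sketch, the per-image open-set score is the maximum confidence among retained slots; the paper's actual training objective and scoring for detecting super-label shift may differ.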
