
Zero-shot Active Learning Using Self Supervised Learning (2401.01690v1)

Published 3 Jan 2024 in cs.LG

Abstract: Deep learning algorithms are often said to be data hungry. The performance of such algorithms generally improves as more annotated data is fed into the model. While collecting unlabelled data is easy (it can be scraped from the internet), annotating it is a tedious and expensive task. Given a fixed budget for data annotation, Active Learning helps select the subset of data for annotation such that a deep learning model trained on that subset has the best generalization performance achievable under the budget. In this work, we propose a new Active Learning approach that is model-agnostic and does not require an iterative process. We leverage self-supervised learned features for the task of Active Learning. The benefit of self-supervised learning is that one can obtain useful feature representations of the input data without any annotation.
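The abstract does not spell out how the labelling subset is chosen from the self-supervised features, so the sketch below is an assumption: it uses greedy k-center (farthest-first) selection over precomputed embeddings, a standard budget-constrained, model-agnostic selection rule from the core-set line of work, applied here to illustrate the single-pass (non-iterative) setting the paper describes.

```python
import numpy as np

def greedy_k_center(embeddings, budget, seed=0):
    """Pick `budget` diverse points via greedy k-center (farthest-first
    traversal) over feature embeddings.

    embeddings: (n, d) array of self-supervised features.
    Returns the list of indices to send for annotation.
    """
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    selected = [int(rng.integers(n))]  # arbitrary first centre
    # distance from every point to its nearest selected centre
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(budget - 1):
        nxt = int(np.argmax(dists))    # farthest point from current centres
        selected.append(nxt)
        new_d = np.linalg.norm(embeddings - embeddings[nxt], axis=1)
        dists = np.minimum(dists, new_d)
    return selected

# Toy usage: 100 random "features" standing in for self-supervised
# embeddings, with an annotation budget of 10.
feats = np.random.default_rng(1).normal(size=(100, 16))
picked = greedy_k_center(feats, budget=10)
```

Because the features come from a pretrained self-supervised encoder rather than the downstream model, the selection needs no labels and no retraining loop: it runs once over the unlabelled pool, which matches the model-agnostic, non-iterative goal stated above.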

