
Machine learning with limited data (2101.11461v1)

Published 18 Jan 2021 in cs.CV

Abstract: Thanks to the availability of powerful computing resources, big data, and deep learning algorithms, computer vision has made great progress in the last few years. Computer vision systems have begun to surpass humans in some tasks, such as object recognition, object detection, face recognition, and pose estimation. Many computer vision algorithms have been deployed in real-world applications and have started to improve our quality of life. However, big data and labels are not always available. Sometimes only very limited labeled data exists, as with medical images, which require experts to label them. In this paper, we study few-shot image classification, in which only very few labeled examples are available. Machine learning with little data is a major challenge. To tackle it, we propose two methods and test their effectiveness thoroughly. The first augments image features by mixing the styles of these images. The second applies spatial attention to explore the relations between patches of images. We also find that domain shift is a critical issue in few-shot learning when the training and testing domains differ. We therefore propose a more realistic setting, cross-domain few-shot learning with unlabeled data, in which some unlabeled data is available in the target domain. We propose two methods for this setting. The first transfers the style information of the unlabeled target dataset to the samples in the source dataset and trains a model on both the stylized and the original images. The second is a unified framework that fully utilizes all the data. Both of our methods surpass the baseline by a large margin.
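The abstract does not spell out how the feature-style mixing works, but augmentations of this kind typically interpolate channel-wise feature statistics between two samples (in the spirit of MixStyle-type methods). The sketch below is an illustrative reconstruction under that assumption, not the paper's actual implementation; the function names and the list-of-channels feature layout are hypothetical.

```python
import math

def channel_stats(feat):
    """Per-channel mean and standard deviation of a feature map,
    given as a list of channels, each a flat list of activations."""
    mus, sigmas = [], []
    for ch in feat:
        mu = sum(ch) / len(ch)
        var = sum((x - mu) ** 2 for x in ch) / len(ch)
        mus.append(mu)
        sigmas.append(math.sqrt(var) + 1e-6)  # epsilon for stability
    return mus, sigmas

def mix_style(feat_a, feat_b, lam):
    """Normalize feat_a channel-wise, then re-style it with
    statistics interpolated between feat_a and feat_b.
    lam = 1.0 keeps feat_a's style; lam = 0.0 adopts feat_b's."""
    mu_a, sig_a = channel_stats(feat_a)
    mu_b, sig_b = channel_stats(feat_b)
    out = []
    for c, ch in enumerate(feat_a):
        mu_mix = lam * mu_a[c] + (1 - lam) * mu_b[c]
        sig_mix = lam * sig_a[c] + (1 - lam) * sig_b[c]
        out.append([sig_mix * (x - mu_a[c]) / sig_a[c] + mu_mix
                    for x in ch])
    return out
```

The content (normalized activations) of `feat_a` is preserved while its style (channel statistics) is blended toward `feat_b`, which is why such mixing acts as a style augmentation rather than a content change.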

Citations (7)

Authors (1)
