Omni-supervised Facial Expression Recognition via Distilled Data (2005.08551v5)

Published 18 May 2020 in cs.CV

Abstract: Facial expression plays an important role in understanding human emotions. Recently, deep learning based methods have shown promising results for facial expression recognition (FER). However, the performance of current state-of-the-art FER approaches is directly tied to the amount of labeled training data. To address this issue, prior works employ a pretrain-and-finetune strategy, i.e., they use a large amount of unlabeled data to pretrain the network and then finetune it on the labeled data. Because the labeled data is limited, the final network performance remains restricted. From a different perspective, we propose to perform omni-supervised learning to directly exploit reliable samples in a large amount of unlabeled data for network training. Specifically, a new dataset is first constructed by using a primitive model, trained on a small number of labeled samples, to select samples with high confidence scores from a face dataset, i.e., MS-Celeb-1M, based on feature-wise similarity. We experimentally verify that the new dataset created in such an omni-supervised manner can significantly improve the generalization ability of the learned FER model. However, as the number of training samples grows, computational cost and training time increase dramatically. To tackle this, we apply a dataset distillation strategy to compress the created dataset into several informative class-wise images, significantly improving training efficiency. We conduct extensive experiments on widely used benchmarks, where consistent performance gains are achieved under various settings using the proposed framework. More importantly, the distilled dataset boosts FER performance with negligible additional computational cost.
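
The abstract describes two technical stages: selecting reliable unlabeled faces by feature-wise similarity against a primitive FER model, then compressing the enlarged dataset via dataset distillation. Below is a minimal PyTorch sketch of the first stage only, under illustrative assumptions: `primitive_model` is assumed to return (features, logits), `labeled_loader` and `unlabeled_loader` are hypothetical data loaders, and the 0.8 similarity threshold is a placeholder, not a value taken from the paper.

```python
import torch
import torch.nn.functional as F

# Assumptions (not from the paper): `primitive_model` is an FER network trained on
# the small labeled set that returns (features, logits); `unlabeled_loader` iterates
# over batches of face images from a large unlabeled pool such as MS-Celeb-1M.

@torch.no_grad()
def build_class_prototypes(primitive_model, labeled_loader, num_classes, feat_dim):
    """Average the L2-normalized features of labeled samples per expression class."""
    protos = torch.zeros(num_classes, feat_dim)
    counts = torch.zeros(num_classes)
    for images, labels in labeled_loader:
        feats, _ = primitive_model(images)
        feats = F.normalize(feats, dim=1)
        protos.index_add_(0, labels, feats)
        counts.index_add_(0, labels, torch.ones_like(labels, dtype=torch.float))
    return F.normalize(protos / counts.clamp(min=1).unsqueeze(1), dim=1)

@torch.no_grad()
def select_confident_samples(primitive_model, unlabeled_loader, prototypes, threshold=0.8):
    """Keep unlabeled faces whose feature is highly similar to one class prototype."""
    selected = []
    for images in unlabeled_loader:
        feats, _ = primitive_model(images)
        feats = F.normalize(feats, dim=1)
        sims = feats @ prototypes.t()          # cosine similarity to each class prototype
        conf, pseudo_labels = sims.max(dim=1)  # most similar expression class per sample
        keep = conf > threshold                # confidence gate
        selected.extend(zip(images[keep], pseudo_labels[keep]))
    return selected
```

The selected (image, pseudo-label) pairs would then be merged with the original labeled set for training. The second stage, distilling this enlarged set into a few informative class-wise images, is a separate optimization problem and is not shown here.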

Citations (1)
