
Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices (2203.13436v2)

Published 25 Mar 2022 in cs.CV

Abstract: In this paper, we consider the problem of real-time video-based facial emotion analytics, namely, facial expression recognition, prediction of valence and arousal, and detection of action units. We propose a novel frame-level emotion recognition algorithm that extracts facial features with a single EfficientNet model pre-trained on AffectNet. As a result, our approach may be implemented even for video analytics on mobile devices. Experimental results for the large-scale Aff-Wild2 database from the third Affective Behavior Analysis in-the-wild (ABAW) Competition demonstrate that our simple model is significantly better than the VggFace baseline. In particular, our method achieves 0.15-0.2 higher performance measures on the validation sets for uni-task Expression Classification, Valence-Arousal Estimation, and Action Unit Detection. Due to its simplicity, our approach may be considered a new baseline for all four sub-challenges.


Summary

  • The paper presents an EfficientNet-based approach that surpasses VGGFace baselines, achieving up to a 0.38 F1-score for expression recognition and a mean CCC of 0.46 for valence-arousal prediction.
  • The model employs a multi-task learning framework with shallow neural networks that process each video frame for expression classification, valence-arousal estimation, and action unit detection.
  • Its lightweight design and smoothing techniques enable real-time analytics on mobile devices, promising enhanced user interaction in affective computing applications.

Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices

In the field of affective computing, the accurate and efficient analysis of human emotions from facial cues is pivotal, particularly for enhancing human-machine interaction in domains such as education and health. The paper "Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices" offers a frame-by-frame approach to facial emotion recognition that is efficient enough to run on resource-constrained platforms such as mobile devices.

The authors propose a method utilizing the EfficientNet architecture—pre-trained on the AffectNet dataset—to extract relevant facial features. This approach emphasizes simplicity and high performance, distinguishing itself from prior complex ensemble models that often lack real-time applicability on mobile devices.
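
As a rough illustration of the feature-extraction step, the sketch below computes a frame-level facial embedding with torchvision's EfficientNet-B0. The ImageNet weights, the 224×224 input size, and the frame_embedding helper are stand-ins for this example only; the paper's backbone is pre-trained on face data (VGGFace2, AffectNet), which is not reproduced here.

```python
# Hedged sketch: frame-level facial embeddings with a generic EfficientNet-B0.
# The paper uses weights pre-trained on VGGFace2/AffectNet; ImageNet weights
# are used here purely as a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()  # keep the 1280-d embedding, drop the head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_embedding(face_crop: Image.Image) -> torch.Tensor:
    """Return a 1280-d facial embedding for one cropped face frame."""
    x = preprocess(face_crop).unsqueeze(0)  # (1, 3, 224, 224)
    return backbone(x).squeeze(0)           # (1280,)
```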

Core Contributions and Methodology

The proposed model addresses the four sub-challenges of the third Affective Behavior Analysis in-the-wild (ABAW) Competition: expression recognition, valence-arousal estimation, action unit detection, and multi-task learning. The authors present a straightforward multi-task learning model that processes video frames independently. The key components of their methodology include:

  1. Efficient Facial Feature Extraction: Utilizing the EfficientNet architecture, the model efficiently captures facial embeddings and emotional scores, leveraging pre-training on substantial datasets like VGGFace2 and AffectNet.
  2. Classification and Regression Models: The extracted features serve as inputs to shallow neural networks or multi-layer perceptrons (MLPs), enabling classification and regression across the defined tasks (a minimal sketch of such heads follows this list).
  3. Performance Optimization: Through experimental validation, the method is fine-tuned to deliver superior accuracy compared to the VGGFace baseline provided in the ABAW competition, showcasing improvements of 0.15-0.2 in performance metrics across different tasks.
  4. Frame-Level Analysis for Real-Time Application: The lightweight nature of this solution permits implementation in real-time scenarios, as demonstrated in an Android application for on-device emotion recognition.
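
To make the division of labor between the shared backbone and the per-task predictors concrete, here is a minimal sketch of shallow heads over a shared embedding. The hidden size, the single hidden layer, and the MultiTaskHeads class are illustrative assumptions rather than the paper's exact configuration; the output counts (8 expression classes, 12 action units) follow the ABAW setup but are kept as parameters.

```python
# Hedged sketch: shallow per-task heads on top of a shared facial embedding.
# Layer sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    def __init__(self, embed_dim: int = 1280, num_expressions: int = 8,
                 num_action_units: int = 12, hidden: int = 128):
        super().__init__()
        def mlp(out_dim: int) -> nn.Sequential:
            return nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
        self.expression = mlp(num_expressions)
        self.valence_arousal = mlp(2)
        self.action_units = mlp(num_action_units)

    def forward(self, emb: torch.Tensor) -> dict:
        return {
            "expression": self.expression(emb),                        # class logits
            "valence_arousal": torch.tanh(self.valence_arousal(emb)),  # clamped to [-1, 1]
            "action_units": self.action_units(emb),                    # AU logits; apply sigmoid at inference
        }

# Usage with the earlier sketch:
#   heads = MultiTaskHeads()
#   outputs = heads(frame_embedding(face_crop).unsqueeze(0))
```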

Experimental Results and Findings

The model demonstrates notable advancements in expression recognition, valence-arousal estimation, and action unit detection, with a particular emphasis on utilizing both embeddings and scores for enhanced accuracy. The experimental outcomes reflect a marked improvement over baseline models, positioning the methodology as a potential new standard for baseline comparisons in future affective computing challenges.

  1. Enhancements over Baseline Models: The EfficientNet-based approach consistently outperforms the baseline models, reaching a 0.38 F1-score for expression recognition, a mean CCC of 0.46 for valence-arousal prediction, and a 0.54 F1-score for action unit detection.
  2. Optimal Model Configuration: The EfficientNet-B0 emerges as the best-performing model configuration, striking a balance between computational efficiency and predictive performance.
  3. Smoothing Techniques: The application of smoothing techniques, such as mean and median filters, further refines performance by enhancing the temporal consistency of predictions across video sequences (see the sketch after this list).
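
For reference, below is a small sketch of two pieces named above: the Concordance Correlation Coefficient (CCC), the standard score for valence-arousal prediction, and mean/median smoothing of per-frame outputs. The smooth_predictions helper and the default window size are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: the CCC metric and simple temporal smoothing of frame-level
# predictions. Window size is illustrative.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Concordance Correlation Coefficient between two 1-D signals."""
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def smooth_predictions(frame_scores: np.ndarray, window: int = 5,
                       mode: str = "median") -> np.ndarray:
    """Smooth per-frame outputs of shape (num_frames, num_outputs) along time."""
    if mode == "median":
        # median filter applied independently to each output dimension
        return median_filter(frame_scores, size=(window, 1), mode="nearest")
    return uniform_filter1d(frame_scores, size=window, axis=0, mode="nearest")
```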

Implications and Future Directions

The practical implications of this research are substantial, given the model’s suitability for integration into mobile platforms, which could significantly enhance user interaction capabilities in personal and professional contexts. The potential to generalize this approach to various real-world settings without dataset-specific tuning presents a major advancement in affective computing.

Looking forward, promising areas for future development include the integration of this frame-level approach with sequential models or attention mechanisms to leverage temporal dependencies within video data, thus enhancing robustness and accuracy. Additionally, there is value in exploring ensemble techniques that combine the proposed model with other video representation strategies for even greater effectiveness.

The research outlines a clear path for advancing the field of emotion recognition, setting a benchmark for simplicity, efficiency, and performance in real-time, in-the-wild applications.
