
PAM: Pose Attention Module for Pose-Invariant Face Recognition (2111.11940v1)

Published 23 Nov 2021 in cs.CV

Abstract: Pose variation is one of the key challenges in face recognition. Conventional techniques mainly focus on face frontalization or face augmentation in image space. However, transforming face images in image space is not guaranteed to preserve the identity features of the original image without loss. Moreover, these methods incur higher computational costs and memory requirements due to the additional models they rely on. We argue that it is more desirable to perform feature transformation in hierarchical feature space rather than image space, which can take advantage of different feature levels and benefit from joint learning with representation learning. To this end, we propose a lightweight and easy-to-implement attention block, named Pose Attention Module (PAM), for pose-invariant face recognition. Specifically, PAM performs frontal-profile feature transformation in hierarchical feature space by learning residuals between pose variations with a soft gate mechanism. We validated the effectiveness of the PAM block design through extensive ablation studies and verified its performance on several popular benchmarks, including LFW, CFP-FP, AgeDB-30, CPLFW, and CALFW. Experimental results show that our method not only outperforms state-of-the-art methods but also reduces memory requirements by more than 75 times. It is noteworthy that our method is not limited to face recognition with large pose variations: by adjusting the soft gate mechanism of PAM to a specific coefficient, such a semantic attention block can easily be extended to address other intra-class imbalance problems in face recognition, including large variations in age, illumination, expression, etc.
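
The abstract describes PAM as a gated residual transformation applied at several levels of a backbone's feature hierarchy. The PyTorch sketch below is a minimal illustration of that idea, assuming a per-channel soft gate and a bottleneck residual branch; the class name, layer layout, and reduction ratio are assumptions made for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn


class PoseAttentionModule(nn.Module):
    """Illustrative PAM-style block (hypothetical layout, not the paper's exact architecture).

    Learns a residual intended to map profile-pose features toward
    frontal-pose features and blends it into the input through a
    learned soft gate.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Residual branch: a small bottleneck that predicts the frontal-profile feature residual.
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        # Soft gate: per-channel weights in [0, 1] that control how much of the residual is applied.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)                  # (N, C, 1, 1) gate values in [0, 1]
        return x + g * self.residual(x)   # gated residual feature transformation


if __name__ == "__main__":
    # Example: insert the block after an intermediate backbone stage,
    # e.g. a 56x56 feature map with 64 channels.
    feats = torch.randn(2, 64, 56, 56)
    pam = PoseAttentionModule(channels=64)
    print(pam(feats).shape)  # torch.Size([2, 64, 56, 56])
```

A block of this form adds only a few pointwise convolutions per stage, which is consistent with the abstract's claim that PAM is lightweight compared with approaches that attach separate frontalization or augmentation models.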

Citations (4)
