
HIH: Towards More Accurate Face Alignment via Heatmap in Heatmap (2104.03100v2)

Published 7 Apr 2021 in cs.CV

Abstract: Heatmap-based regression overcomes the lack of spatial and contextual information in direct coordinate regression, and has revolutionized the task of face alignment. Yet it suffers from quantization errors caused by neglecting subpixel coordinates during image resizing and network downsampling. In this paper, we first quantitatively analyze the quantization error on benchmarks, which accounts for more than 1/3 of the whole prediction error for state-of-the-art methods. To tackle this problem, we propose a novel Heatmap In Heatmap (HIH) representation and a coordinate soft-classification (CSC) method, which are seamlessly integrated into the classic hourglass network. The HIH representation utilizes nested heatmaps to jointly represent the coordinate label: one heatmap, called the integer heatmap, stands for the integer coordinate, and the other, named the decimal heatmap, represents the subpixel coordinate. The range of a decimal heatmap spans one pixel of the corresponding integer heatmap. Besides, we recast the offset regression problem as an interval classification task, and CSC regards the confidence of a pixel as the probability of the corresponding interval. Meanwhile, CSC applies a distribution loss that leverages soft labels generated from the Gaussian distribution function to guide the offset heatmap training, which makes it easier to learn the distribution of coordinate offsets. Extensive experiments on challenging benchmark datasets demonstrate that our HIH can achieve state-of-the-art results. In particular, our HIH reaches 4.08 NME (Normalized Mean Error) on WFLW, and 3.21 on COFW, which exceeds previous methods by a significant margin.
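The nested-heatmap idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, heatmap sizes, and Gaussian parameters are assumptions. A landmark coordinate is split into an integer pixel (encoded by a Gaussian peak in the integer heatmap) and a subpixel offset in [0, 1), which is discretized into intervals of a small decimal heatmap whose cell confidences act as interval probabilities, as in CSC.

```python
import numpy as np

def gaussian_map(size, center, sigma=1.0):
    # 2D Gaussian "soft label" peaked at `center` on a size x size grid,
    # as used to supervise both nested heatmaps.
    ys, xs = np.mgrid[0:size, 0:size]
    g = np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))
    return g / g.max()

def encode_hih(coord, heatmap_size=64, decimal_size=8, sigma=1.0):
    # Split the coordinate into an integer pixel and a subpixel offset.
    x, y = coord
    ix, iy = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - ix, y - iy  # subpixel offsets in [0, 1)
    integer_hm = gaussian_map(heatmap_size, (ix, iy), sigma)
    # The decimal heatmap spans one pixel of the integer heatmap; each of
    # its decimal_size x decimal_size cells is one offset interval, and the
    # Gaussian soft label puts the highest confidence on the true interval.
    decimal_hm = gaussian_map(
        decimal_size, (fx * decimal_size - 0.5, fy * decimal_size - 0.5), sigma
    )
    return integer_hm, decimal_hm

def decode_hih(integer_hm, decimal_hm):
    # Argmax of the integer heatmap gives the pixel; argmax of the decimal
    # heatmap gives the offset interval, decoded at the interval center.
    iy, ix = np.unravel_index(np.argmax(integer_hm), integer_hm.shape)
    dy, dx = np.unravel_index(np.argmax(decimal_hm), decimal_hm.shape)
    n = decimal_hm.shape[0]
    return ix + (dx + 0.5) / n, iy + (dy + 0.5) / n
```

With an 8x8 decimal heatmap, the decoded coordinate is accurate to within 1/8 of a pixel rather than a full pixel, which is the quantization-error reduction the paper targets.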

Citations (16)
