
PaDNet: Pan-Density Crowd Counting (1811.02805v3)

Published 7 Nov 2018 in cs.CV

Abstract: The problem of counting crowds in varying density scenes or in different density regions of the same scene, named as pan-density crowd counting, is highly challenging. Previous methods are designed for single density scenes or do not fully utilize pan-density information. We propose a novel framework, the Pan-Density Network (PaDNet), for pan-density crowd counting. In order to effectively capture pan-density information, PaDNet has a novel module, the Density-Aware Network (DAN), that contains multiple sub-networks pretrained on scenarios with different densities. Further, a module named the Feature Enhancement Layer (FEL) is proposed to aggregate the feature maps learned by DAN. It learns an enhancement rate or a weight for each feature map to boost these feature maps. Further, we propose two refined metrics, Patch MAE (PMAE) and Patch RMSE (PRMSE), for better evaluating the model performance on pan-density scenarios. Extensive experiments on four crowd counting benchmark datasets indicate that PaDNet achieves state-of-the-art recognition performance and high robustness in pan-density crowd counting.
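The abstract's Patch MAE (PMAE) and Patch RMSE (PRMSE) metrics evaluate counting error at the patch level rather than over the whole image. A minimal sketch of the idea follows, assuming a simple uniform grid split and summing predicted and ground-truth density maps per patch to get patch counts; the paper's exact patch scheme and normalization may differ, and the helper names are hypothetical.

```python
import numpy as np

def patch_errors(pred, gt, grid=4):
    """Hypothetical helper: split two density maps into a grid x grid
    layout and return the per-patch absolute count errors."""
    h, w = pred.shape
    hs, ws = h // grid, w // grid
    errs = []
    for i in range(grid):
        for j in range(grid):
            # patch count = sum of density-map values inside the patch
            p = pred[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws].sum()
            g = gt[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws].sum()
            errs.append(abs(p - g))
    return np.array(errs)

def pmae(preds, gts, grid=4):
    """Patch MAE: mean absolute patch count error over all images."""
    return float(np.mean([patch_errors(p, g, grid).mean()
                          for p, g in zip(preds, gts)]))

def prmse(preds, gts, grid=4):
    """Patch RMSE: root of the mean squared patch count error."""
    return float(np.sqrt(np.mean([(patch_errors(p, g, grid) ** 2).mean()
                                  for p, g in zip(preds, gts)])))
```

Because errors are measured per patch, a model that overcounts sparse regions while undercounting dense ones is penalized even when the errors cancel at the image level, which is the motivation the abstract gives for these refined metrics.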

Citations (97)