Looking GLAMORous: Vehicle Re-Id in Heterogeneous Cameras Networks with Global and Local Attention (2002.02256v1)

Published 6 Feb 2020 in cs.CV

Abstract: Vehicle re-identification (re-id) is a fundamental problem for modern surveillance camera networks. Existing approaches for vehicle re-id utilize global features and local features for re-id by combining multiple subnetworks and losses. In this paper, we propose GLAMOR, or Global and Local Attention MOdules for Re-id. GLAMOR performs global and local feature extraction simultaneously in a unified model to achieve state-of-the-art performance in vehicle re-id across a variety of adversarial conditions and datasets (mAPs 80.34, 76.48, 77.15 on VeRi-776, VRIC, and VeRi-Wild, respectively). GLAMOR introduces several contributions: a better backbone construction method that outperforms recent approaches, group and layer normalization to address conflicting loss targets for re-id, a novel global attention module for global feature extraction, and a novel local attention module for self-guided part-based local feature extraction that does not require supervision. Additionally, GLAMOR is a compact and fast model that is 10x smaller while delivering 25% better performance.
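The abstract describes global and local attention modules that reweight features without extra supervision. As a rough illustration of the general idea (not the paper's actual architecture), the sketch below implements a minimal spatial attention step in plain Python: each location of a C x H x W feature map is scored by its channel sum, the scores are passed through a softmax, and the map is reweighted. The function names and scoring rule are hypothetical choices for illustration only.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a flat list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def global_attention(feature_map):
    """Illustrative global (spatial) attention over a C x H x W feature map
    given as nested lists. Each spatial location is scored by its sum over
    channels; softmax of the scores gives attention weights that rescale
    every channel at that location. This is a hypothetical sketch, not the
    GLAMOR module itself."""
    C = len(feature_map)
    H = len(feature_map[0])
    W = len(feature_map[0][0])
    # Per-location score: sum of activations across channels.
    scores = [sum(feature_map[c][h][w] for c in range(C))
              for h in range(H) for w in range(W)]
    weights = softmax(scores)
    # Reweight every channel by its location's attention weight.
    return [[[feature_map[c][h][w] * weights[h * W + w]
              for w in range(W)] for h in range(H)] for c in range(C)]
```

In practice such a module would be implemented with learned convolutional score functions inside a deep network; this sketch only shows the reweighting mechanics.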

Citations (27)