EgoReID: Cross-view Self-Identification and Human Re-identification in Egocentric and Surveillance Videos (1612.08153v1)

Published 24 Dec 2016 in cs.CV and cs.CG

Abstract: Human identification remains one of the most challenging tasks in the computer vision community due to drastic changes in visual features across different viewpoints, lighting conditions, occlusion, etc. Most of the literature has focused on exploring human re-identification across viewpoints that are not too drastically different in nature. Cameras usually capture oblique or side views of humans, leaving room for a lot of geometric and visual reasoning. Given the recent popularity of egocentric and top-view vision, re-identification across these two drastically different views can now be explored. Given an egocentric and a top-view video, our goal is to identify the cameraman in the content of the top-view video, and also to re-identify the people visible in the egocentric video by matching them to the identities present in the top-view video. We propose a CRF-based method to address the two problems. Our experimental results demonstrate the efficiency of the proposed approach on a variety of videos recorded from the two views.
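
The abstract only states that the matching is CRF-based, without giving the formulation. As a rough, hypothetical illustration of the kind of cross-view identity matching the paper targets (not the authors' actual method), the sketch below treats the problem as a one-to-one assignment between egocentric detections and top-view identities using unary appearance costs; the function name and the use of precomputed feature vectors are assumptions for the sake of the example.

```python
# Hypothetical sketch only: cross-view identity matching as an assignment
# problem with unary (appearance) costs. The paper's CRF would additionally
# encode pairwise consistency terms; this is not the authors' formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_identities(ego_feats, top_feats):
    """Match egocentric detections to top-view identities.

    ego_feats: (m, d) array, one feature vector per person seen in the egocentric video.
    top_feats: (n, d) array, one feature vector per identity tracked in the top-view video.
    Returns a list of (ego_index, top_index) pairs.
    """
    # Unary cost: cosine dissimilarity between cross-view feature vectors.
    ego = ego_feats / np.linalg.norm(ego_feats, axis=1, keepdims=True)
    top = top_feats / np.linalg.norm(top_feats, axis=1, keepdims=True)
    cost = 1.0 - ego @ top.T

    # One-to-one assignment: each top-view identity is used at most once,
    # a hard mutual-exclusion constraint that a CRF would typically express
    # through pairwise potentials.
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ego = rng.normal(size=(3, 16))   # 3 people visible in the egocentric video
    top = rng.normal(size=(5, 16))   # 5 identities tracked in the top-view video
    print(match_identities(ego, top))
```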

Citations (3)
