
Augmented Robust PCA For Foreground-Background Separation on Noisy, Moving Camera Video (1709.09328v1)

Published 27 Sep 2017 in stat.ML and cs.CV

Abstract: This work presents a novel approach for robust PCA with total variation regularization for foreground-background separation and denoising on noisy, moving camera video. Our proposed algorithm registers the raw (possibly corrupted) frames of a video and then jointly processes the registered frames to produce a decomposition of the scene into a low-rank background component that captures the static components of the scene, a smooth foreground component that captures the dynamic components of the scene, and a sparse component that can isolate corruptions and other non-idealities. Unlike existing methods, our proposed algorithm produces a panoramic low-rank component that spans the entire field of view, automatically stitching together corrupted data from partially overlapping scenes. The low-rank portion of our robust PCA model is based on a recently discovered optimal low-rank matrix estimator (OptShrink) that requires no parameter tuning. We demonstrate the performance of our algorithm on both static and moving camera videos corrupted by noise and outliers.

Citations (10)
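To make the decomposition described in the abstract concrete, below is a minimal, illustrative sketch of a robust-PCA-plus-total-variation decomposition. It is not the authors' algorithm: the paper registers moving-camera frames and uses the OptShrink low-rank estimator, whereas this sketch assumes already-registered frames, substitutes singular-value soft-thresholding for OptShrink, and uses scikit-image's TV denoiser for the smooth foreground term. All parameter names and values (lam_lowrank, lam_sparse, lam_tv, n_iters) are assumptions for illustration only.

```python
# Illustrative RPCA + TV alternating-minimization sketch (NOT the paper's method).
# Assumes frames are pre-registered and stacked as columns of Y.
import numpy as np
from skimage.restoration import denoise_tv_chambolle


def svd_soft_threshold(M, tau):
    """Singular-value soft-thresholding (stand-in for the paper's OptShrink step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt


def soft_threshold(M, tau):
    """Elementwise soft-thresholding for the sparse outlier component."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)


def rpca_tv_decompose(Y, frame_shape, lam_lowrank=1.0, lam_sparse=0.05,
                      lam_tv=0.1, n_iters=50):
    """Decompose a frames-as-columns matrix Y into
       L (low-rank background) + F (TV-smooth foreground) + E (sparse outliers)."""
    L = np.zeros_like(Y)
    F = np.zeros_like(Y)
    E = np.zeros_like(Y)
    for _ in range(n_iters):
        # Low-rank background update on the current residual.
        L = svd_soft_threshold(Y - F - E, lam_lowrank)
        # TV-smooth foreground update, applied frame by frame.
        R = Y - L - E
        F = np.stack([
            denoise_tv_chambolle(R[:, j].reshape(frame_shape),
                                 weight=lam_tv).ravel()
            for j in range(Y.shape[1])
        ], axis=1)
        # Sparse outlier update.
        E = soft_threshold(Y - L - F, lam_sparse)
    return L, F, E


if __name__ == "__main__":
    # Toy example: 20 random 32x32 "frames" stacked as columns.
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(32 * 32, 20))
    L, F, E = rpca_tv_decompose(frames, frame_shape=(32, 32))
    print(L.shape, F.shape, E.shape)
```

A full implementation of the paper's approach would additionally register the raw frames onto a common panoramic coordinate system before the decomposition and replace the singular-value thresholding step with the tuning-free OptShrink estimator.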
