
Latent Spatiotemporal Adaptation for Generalized Face Forgery Video Detection (2309.04795v2)

Published 9 Sep 2023 in cs.CV

Abstract: Face forgery videos have caused severe public concerns, and many detectors have been proposed. However, most of these detectors suffer from limited generalization when detecting videos from unknown distributions, such as from unseen forgery methods. In this paper, we find that different forgery videos have distinct spatiotemporal patterns, which may be the key to generalization. To leverage this finding, we propose a Latent Spatiotemporal Adaptation (LAST) approach to facilitate generalized face forgery video detection. The key idea is to optimize the detector adaptive to the spatiotemporal patterns of unknown videos in latent space to improve the generalization. Specifically, we first model the spatiotemporal patterns of face videos by incorporating a lightweight CNN to extract local spatial features of each frame and then cascading a vision transformer to learn the long-term spatiotemporal representations in latent space, which should contain more clues than in pixel space. Then by optimizing a transferable linear head to perform the usual forgery detection task on known videos and recover the spatiotemporal clues of unknown target videos in a semi-supervised manner, our detector could flexibly adapt to unknown videos' spatiotemporal patterns, leading to improved generalization. Additionally, to eliminate the influence of specific forgery videos, we pre-train our CNN and transformer only on real videos with two simple yet effective self-supervised tasks: reconstruction and contrastive learning in latent space and keep them frozen during fine-tuning. Extensive experiments on public datasets demonstrate that our approach achieves state-of-the-art performance against other competitors with impressive generalization and robustness.
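The abstract's core recipe — a frozen latent-space backbone plus a small transferable head trained with a supervised detection loss on labeled videos and a latent-reconstruction loss on unlabeled target videos — can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: a fixed random projection stands in for the pre-trained CNN + vision transformer, dimensions and loss weights are arbitrary, and all function names (`encode`, `head`, `step`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, HW, D = 8, 64, 16  # frames, flattened pixels per frame, latent dim (toy sizes)

# Fixed "pretrained" weights: a stand-in for the paper's frozen CNN + transformer.
P = rng.normal(size=(HW, D)) / np.sqrt(HW)

def encode(video):
    """Map a (T, HW) video to a latent vector: per-frame projection, temporal mean."""
    return (video @ P).mean(axis=0)  # shape (D,)

def head(z, W):
    """Linear head: first output is the forgery logit, the rest reconstruct the latent."""
    out = z @ W  # shape (1 + D,)
    return out[0], out[1:]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(W, z_lab, y, z_unl, lr=0.05, lam=0.5):
    """One semi-supervised update on the head only (backbone stays frozen):
    binary cross-entropy on a labeled latent + weighted MSE reconstruction
    of an unlabeled target-domain latent."""
    logit, _ = head(z_lab, W)
    g_cls = (sigmoid(logit) - y) * z_lab            # grad of BCE w.r.t. W[:, 0]
    _, z_hat = head(z_unl, W)
    g_rec = 2.0 * np.outer(z_unl, z_hat - z_unl)    # grad of MSE w.r.t. W[:, 1:]
    grad = np.zeros_like(W)
    grad[:, 0] = g_cls
    grad[:, 1:] = lam * g_rec
    return W - lr * grad

# Toy data: one labeled real video, one unlabeled video from a shifted distribution.
W = rng.normal(size=(D, 1 + D)) * 0.01
z_lab = encode(rng.normal(size=(T, HW)))
z_unl = encode(rng.normal(size=(T, HW)) + 0.5)  # distribution shift stand-in
for _ in range(200):
    W = step(W, z_lab, y=0.0, z_unl=z_unl)
```

After training, the head both pushes the labeled latent toward its label and reconstructs the unlabeled target latent, mirroring (at toy scale) how the paper's transferable linear head adapts to unknown videos' spatiotemporal patterns while the backbone remains fixed.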

Citations (1)