
Beyond Deepfake Images: Detecting AI-Generated Videos (2404.15955v1)

Published 24 Apr 2024 in cs.CV

Abstract: Recent advances in generative AI have led to the development of techniques to generate visually realistic synthetic video. While a number of techniques have been developed to detect AI-generated synthetic images, in this paper we show that synthetic image detectors are unable to detect synthetic videos. We demonstrate that this is because synthetic video generators introduce substantially different traces than those left by image generators. Despite this, we show that synthetic video traces can be learned, and used to perform reliable synthetic video detection or generator source attribution even after H.264 re-compression. Furthermore, we demonstrate that while detecting videos from new generators through zero-shot transferability is challenging, accurate detection of videos from a new generator can be achieved through few-shot learning.

Citations (4)

Summary

  • The paper demonstrates that AI-generated videos require detection methods beyond image-based techniques due to distinct embedded traces.
  • The paper introduces a novel approach that reliably attributes videos to their generator even after H.264 compression.
  • The paper shows that while zero-shot transferability struggles with unseen generators, few-shot learning significantly boosts detection accuracy.

The paper "Beyond Deepfake Images: Detecting AI-Generated Videos" examines the challenges and methods of detecting video created by generative AI, moving beyond still deepfake images. The authors show that existing detectors for AI-generated images fall short when applied to synthetic videos, because the traces left by video generators differ substantially from those left by image generators.
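To make the notion of "traces" concrete: forensic detectors commonly operate on a high-pass residual of each frame, which suppresses scene content and emphasizes the subtle high-frequency noise patterns a generator leaves behind. The sketch below uses a simple Laplacian filter purely as a generic illustration; it is not the paper's actual feature extractor.

```python
import numpy as np

def highpass_residual(frame):
    """Return a high-pass residual of a grayscale frame.

    Applies a discrete Laplacian (4*center minus the four neighbors),
    which removes slowly varying scene content and keeps the
    high-frequency noise where generator fingerprints tend to live.
    Illustrative only; real detectors use learned or richer filters.
    """
    f = frame.astype(float)
    return (4 * f[1:-1, 1:-1]
            - f[:-2, 1:-1] - f[2:, 1:-1]
            - f[1:-1, :-2] - f[1:-1, 2:])

# Toy frame: the residual of a 16x16 frame is 14x14 (borders dropped).
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(16, 16))
res = highpass_residual(frame)
```

A detector or attribution model would then be trained on statistics of such residuals rather than on raw pixels, which is one reason image-trained models can fail on video: compression and temporal generation alter exactly these high-frequency patterns.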

Key findings and contributions of the paper include:

  1. Difference in Traces: The paper highlights that the artifacts and traces embedded within AI-generated videos vary from those in still images. This necessitates the development of specialized detection methodologies tailored for video content.
  2. Detection and Attribution: The authors propose a novel approach to learn and recognize these unique video traces, which allows for reliable detection of synthetic videos. This capability extends to attributing the video to its generator, even after the video has undergone H.264 compression—a common format for reducing video file sizes.
  3. Challenges with Zero-Shot Transferability: The paper reveals difficulties in detecting synthetic videos from unseen generators through zero-shot transfer techniques. This indicates that models trained on one set of generative tools do not easily adapt to unknown or new video generators.
  4. Few-Shot Learning for New Generators: Although zero-shot transferability is challenging, the paper demonstrates success in training models to accurately detect videos from new generators using few-shot learning. This approach involves training the model using a limited set of examples from the new generator, which significantly enhances its detection capability.
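The few-shot idea in point 4 can be sketched with a toy example: adapt a detector using only a handful of labeled examples from the new generator. Everything below is hypothetical and assumed for illustration (a linear logistic detector, random 4-dimensional feature vectors, a plain gradient-descent loop); it shows the general adaptation recipe, not the paper's model or features.

```python
import numpy as np

def few_shot_finetune(base_w, base_b, X_few, y_few, lr=0.1, steps=200):
    """Adapt a (hypothetical) pretrained linear detector to a new
    generator using a small labeled set. Plain logistic-regression
    gradient descent; illustrative sketch, not the paper's method."""
    w, b = base_w.copy(), float(base_b)
    for _ in range(steps):
        z = X_few @ w + b
        p = 1.0 / (1.0 + np.exp(-z))             # sigmoid probabilities
        grad_w = X_few.T @ (p - y_few) / len(y_few)
        grad_b = float(np.mean(p - y_few))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy few-shot set: 8 "real" and 8 "synthetic" feature vectors, where
# the new generator shifts the feature mean.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(8, 4))
fake = rng.normal(1.5, 1.0, size=(8, 4))
X = np.vstack([real, fake])
y = np.array([0] * 8 + [1] * 8, dtype=float)

w, b = few_shot_finetune(np.zeros(4), 0.0, X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(preds == y))
```

The point of the sketch is the data regime, not the model: a small labeled sample from the new generator is enough to update the decision boundary, which mirrors the paper's finding that few-shot adaptation succeeds where zero-shot transfer struggles.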

Overall, this research provides critical insights into the complexities of AI-generated video detection and suggests promising methodologies for improving detection accuracy in the face of evolving generative technologies.
