Abstract

Image-to-video (I2V) generation aims to use the initial frame (alongside a text prompt) to create a video sequence. A grand challenge in I2V generation is to maintain visual consistency throughout the video: existing methods often struggle to preserve the integrity of the subject, background, and style from the first frame, as well as ensure a fluid and logical progression within the video narrative. To mitigate these issues, we propose ConsistI2V, a diffusion-based method to enhance visual consistency for I2V generation. Specifically, we introduce (1) spatiotemporal attention over the first frame to maintain spatial and motion consistency, (2) noise initialization from the low-frequency band of the first frame to enhance layout consistency. These two approaches enable ConsistI2V to generate highly consistent videos. We also extend the proposed approaches to show their potential to improve consistency in auto-regressive long video generation and camera motion control. To verify the effectiveness of our method, we propose I2V-Bench, a comprehensive evaluation benchmark for I2V generation. Our automatic and human evaluation results demonstrate the superiority of ConsistI2V over existing methods.

Overview

  • The paper introduces ConsistI2V, a diffusion-based model for producing consistent videos from a single image and text prompt.

  • ConsistI2V includes architectural innovations such as spatiotemporal attention and noise initialization for better visual consistency.

  • Performance is evaluated using I2V-Bench, demonstrating superior consistency and quality over existing methods.

  • The model is a significant step forward for applications needing high visual coherence, though the slower motion observed in some generated videos indicates a need for further optimization.

Introduction

The field of generative AI has seen impressive advances in text-to-video (T2V) generation, yet existing methods offer limited control over video content, a crucial requirement for practical applications. The paper "ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation" addresses this challenge with a diffusion-based framework that uses spatiotemporal conditioning mechanisms to strengthen visual consistency when generating video from a single image and a text prompt.

Methodology

ConsistI2V introduces two main architectural components: spatiotemporal attention over the initial frame, and a noise-initialization scheme that reuses the low-frequency band of the initial frame to preserve the video's layout. The model maintains spatial and motion consistency by integrating cross-frame attention over the first frame in the spatial layers and local window attention operations in the temporal layers. The noise-initialization scheme, named FrameInit, leverages the first frame's low-frequency information during inference, substantially improving video stability and quality. Both components are sketched below.
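As a rough illustration of the first component, here is a minimal PyTorch sketch of first-frame-augmented spatial attention: each frame's queries attend over its own tokens plus the tokens of frame 0, letting every frame copy appearance details from the conditioning image. The class name, the `(batch, frames, tokens, dim)` tensor layout, and the head count are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class FirstFrameSpatialAttention(nn.Module):
    """Self-attention whose keys/values also include first-frame tokens (illustrative sketch)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim)
        b, f, t, d = x.shape
        # Broadcast the first frame's tokens to every frame.
        first = x[:, :1].expand(b, f, t, d)
        q = self.to_q(x)
        # Keys/values cover the current frame AND frame 0, so each frame
        # can attend back to the conditioning image.
        kv = torch.cat([x, first], dim=2)                     # (b, f, 2t, d)
        k, v = self.to_k(kv), self.to_v(kv)
        # Fold frames into the batch dimension and split attention heads.
        q = q.reshape(b * f, t, self.heads, d // self.heads).transpose(1, 2)
        k = k.reshape(b * f, 2 * t, self.heads, d // self.heads).transpose(1, 2)
        v = v.reshape(b * f, 2 * t, self.heads, d // self.heads).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v)         # (b*f, heads, t, d/heads)
        out = out.transpose(1, 2).reshape(b, f, t, d)
        return self.to_out(out)
```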
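The low-frequency noise initialization can likewise be sketched with a 3D FFT low-pass filter: a static video formed by repeating the first-frame latent supplies the low spatiotemporal frequencies (the rough layout), while fresh Gaussian noise supplies the high frequencies. The Gaussian mask shape and the cutoff `d0` are illustrative assumptions; in the paper the static video is first diffused to the terminal noise level, which this sketch assumes has already happened.

```python
import torch
import torch.fft as fft

def low_freq_init(static_latent: torch.Tensor, noise: torch.Tensor, d0: float = 0.25) -> torch.Tensor:
    """Mix low frequencies of a (noised) static first-frame video with fresh noise.

    static_latent: (b, c, 1, h, w) first-frame latent, assumed already noised.
    noise:         (b, c, f, h, w) Gaussian noise for the video latent.
    """
    b, c, f, h, w = noise.shape
    static = static_latent.expand(b, c, f, h, w)  # repeat frame 0 across time

    # Gaussian low-pass mask over the spatiotemporal frequency grid.
    ft, fh, fw = torch.meshgrid(
        fft.fftfreq(f, device=noise.device),
        fft.fftfreq(h, device=noise.device),
        fft.fftfreq(w, device=noise.device),
        indexing="ij",
    )
    lpf = torch.exp(-(ft**2 + fh**2 + fw**2) / (2 * d0**2))  # (f, h, w), values in (0, 1]

    static_fft = fft.fftn(static, dim=(2, 3, 4))
    noise_fft = fft.fftn(noise, dim=(2, 3, 4))
    # Layout comes from the first frame (low band); motion/detail from noise (high band).
    mixed = static_fft * lpf + noise_fft * (1 - lpf)
    return fft.ifftn(mixed, dim=(2, 3, 4)).real
```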

Evaluation

The authors introduce I2V-Bench, a comprehensive benchmark that evaluates I2V models against a wide array of metrics covering aspects such as visual quality and consistency. ConsistI2V's performance is rigorously assessed, both automatically and through human evaluation, across multiple datasets, including I2V-Bench. By outperforming existing methods on the majority of metrics and demonstrating outstanding visual consistency, ConsistI2V establishes itself as a significant contribution to controllable video generation.

Conclusion and Broader Impact

ConsistI2V marks a substantial advance in controlled video synthesis, directly addressing the need for visual consistency in I2V generation. Future work aims to further improve the model through more advanced training paradigms and higher-quality datasets. Its broader impact lies in applications that demand high coherence and visual fidelity, such as virtual reality, filmmaking, and animated content creation, although the slower motion in some generated videos signals a need for further optimization before broad deployment. With this study, the authors set a high bar for future work in AI-driven video generation.
