DeblurGS: Gaussian Splatting for Camera Motion Blur

(arXiv:2404.11358)
Published Apr 17, 2024 in cs.CV

Abstract

Although significant progress has been made in reconstructing sharp 3D scenes from motion-blurred images, a transition to real-world applications remains challenging. The primary obstacle stems from severe blur, which leads to inaccuracies in the acquisition of initial camera poses through Structure-from-Motion, a critical aspect often overlooked by previous approaches. To address this challenge, we propose DeblurGS, a method to optimize sharp 3D Gaussian Splatting from motion-blurred images, even with noisy camera pose initialization. We restore a fine-grained sharp scene by leveraging the remarkable reconstruction capability of 3D Gaussian Splatting. Our approach estimates the 6-Degree-of-Freedom camera motion for each blurry observation and synthesizes corresponding blurry renderings for the optimization process. Furthermore, we propose a Gaussian Densification Annealing strategy to prevent the generation of inaccurate Gaussians at erroneous locations during the early training stages, when camera motion is still imprecise. Comprehensive experiments demonstrate that DeblurGS achieves state-of-the-art performance in deblurring and novel view synthesis on real-world and synthetic benchmark datasets, as well as on field-captured blurry smartphone videos.

Figure: Comparison of deblurring methods on images with real motion blur (referencing Ma et al.).

Overview

  • DeblurGS introduces a framework that advances camera-motion deblurring using 3D Gaussian Splatting, reconstructing sharp 3D scenes from blurred images.

  • The technique is robust to inaccuracies in initial camera poses and uses a Gaussian Densification Annealing strategy to recover fine detail without extensive training datasets.

  • The method has practical utility in AR/VR, autonomous navigation, and video analysis, where accurate object and scene reconstruction matters.

  • DeblurGS sets a new benchmark in image deblurring and 3D scene reconstruction, proving effective in both experimental and real-world environments and suggesting several future research directions.

DeblurGS: Advancing Camera Motion Deblurring with 3D Gaussian Splatting

Improved Deblurring with 3D Gaussian Splatting

The paper presents DeblurGS, a framework that advances the deblurring of images distorted by camera motion using 3D Gaussian Splatting (3DGS). The approach reconstructs sharp 3D scenes from motion-blurred images and addresses a key limitation of existing deblurring methods: they assume accurate initial camera poses, which Structure-from-Motion (SfM) cannot reliably recover from blurry inputs. DeblurGS adapts the 3DGS model to tolerate noisy initial camera poses and introduces a Gaussian Densification Annealing strategy that recovers fine details without the need for large-scale training datasets.
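At its core, the method jointly optimizes the Gaussians and each image's latent camera motion: a blurry rendering is synthesized as the average of sharp renders at sub-frame poses along the estimated trajectory and compared against the observation. Below is a minimal sketch of this loop, assuming a differentiable `render` function from some 3DGS implementation; the function names and the simple linear pose interpolation are illustrative assumptions, not the authors' code:

```python
import torch

def interpolate_pose(pose_start, pose_end, t):
    # Linear interpolation of a 6-DoF pose (axis-angle rotation +
    # translation). A stand-in for whatever trajectory parameterization
    # the paper actually uses (e.g. a spline or se(3) interpolation).
    return (1.0 - t) * pose_start + t * pose_end

def render_blurry(gaussians, pose_start, pose_end, render, sub_frames=8):
    # A motion-blurred image is approximated as the average of sharp
    # renderings at sub-frame poses along the estimated camera trajectory.
    timestamps = torch.linspace(0.0, 1.0, sub_frames)
    frames = [render(gaussians, interpolate_pose(pose_start, pose_end, t))
              for t in timestamps]
    return torch.stack(frames).mean(dim=0)

def training_step(gaussians, pose_start, pose_end, observed, render, optimizer):
    # Both the Gaussians and the per-image camera motion (start/end poses
    # held as torch Parameters) receive gradients from the photometric loss.
    optimizer.zero_grad()
    synthesized = render_blurry(gaussians, pose_start, pose_end, render)
    loss = torch.nn.functional.l1_loss(synthesized, observed)
    loss.backward()
    optimizer.step()
    return loss.item()
```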

Capabilities and Innovations

DeblurGS incorporates several key innovations and technical implementations:

  • 3D Gaussian Splatting Adaptation: Uses the 3DGS framework to achieve photo-realistic, sharp reconstructions from blurred observations, circumventing the limitations of conventional NeRF-based approaches on blurry inputs.
  • Camera Motion Estimation with Blurry Renderings: Estimates per-image 6-DoF camera motion and averages sharp renderings along the estimated trajectory (as in the sketch above) to synthesize blurry renderings that are matched against the captured blurry images.
  • Gaussian Densification Annealing: Suppresses densification during early training so the model optimizes camera motion before reconstructing fine detail, preventing Gaussians from being spawned at erroneous locations while poses are still noisy; see the threshold schedule sketched after this list.
  • Sub-frame Alignment and Optimization: Introduces learnable sub-frame alignment parameters so that the sub-frame poses sampled along the latent trajectory match the actual camera motion during exposure (a sketch follows below), refining the synthesized blur.
  • Robustness to Initial Pose Errors: Optimizes effectively even when initial camera poses are estimated from the blurry images themselves, making the method practical for real-world captures where accurate poses are unavailable.
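The annealing idea can be illustrated as a simple schedule that keeps the densification threshold high while the camera motion is still being refined, then relaxes it toward the usual 3DGS behavior. The schedule shape, step counts, and threshold values below are illustrative assumptions, not the paper's settings:

```python
import torch

def densify_threshold(step, warmup_steps=5000, start=4e-4, end=2e-4):
    # Annealed positional-gradient threshold: high early on (few new
    # Gaussians while the estimated camera motion is still noisy), then
    # decayed toward a standard 3DGS value to recover fine detail.
    alpha = min(step / warmup_steps, 1.0)
    return (1.0 - alpha) * start + alpha * end

def maybe_densify(gaussians, grad_norms, step):
    # Clone/split only Gaussians whose accumulated positional gradient
    # exceeds the annealed threshold. `gaussians.densify` is a
    # hypothetical hook for the standard 3DGS clone-and-split routine.
    mask = grad_norms > densify_threshold(step)
    if mask.any():
        gaussians.densify(mask)
```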
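Sub-frame alignment can likewise be sketched as a small learnable module: uniform timestamps assume constant velocity during exposure, whereas learnable increments let the sampled sub-frame poses follow non-uniform camera motion. This parameterization is an assumption for illustration, not the paper's exact formulation:

```python
import torch

class SubFrameAlignment(torch.nn.Module):
    # Learnable sub-frame timestamps in (0, 1), optimized jointly with
    # the per-image camera poses and the Gaussians.
    def __init__(self, sub_frames=8):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(sub_frames))

    def forward(self):
        # Positive increments summing to 1 give monotone timestamps;
        # taking interval midpoints keeps them strictly inside (0, 1).
        inc = torch.softmax(self.logits, dim=0)
        return torch.cumsum(inc, dim=0) - inc / 2
```

In the blur-synthesis sketch above, these timestamps would replace the uniform `torch.linspace` grid.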

Practical Implications and Theoretical Contributions

DeblurGS has significant practical implications:

  • Improved AR/VR, Autonomous Navigation, and Video Analysis: The ability to reconstruct sharp scenes from blurred footage benefits applications that demand accurate object and scene reconstruction.
  • Adaptability in Deblurring Tasks: Unlike prior methods that depend on highly accurate SfM results, DeblurGS handles inaccurate pose inputs, extending its utility to diverse operational environments.

Theoretically, DeblurGS extends the understanding of how 3D reconstruction techniques can be adapted and optimized to handle real-world complexities, such as motion blur, without relying on large-scale annotated data.

Future Directions

DeblurGS opens numerous avenues for future research:

  • Further Optimizations in 3DGS: Exploring more efficient and robust Gaussian splatting techniques that could further improve speed and accuracy.
  • Integration with Other Vision and AI Tasks: Potential cross-utilization with tasks like object detection and tracking, where motion blur is a common issue.
  • Handling of Other Blur Types: Adapting DeblurGS to handle other types of blur, such as out-of-focus blur or multi-motion blur, could widen its applicability.

Conclusion

DeblurGS presents a significant advance in deblurring techniques, showcasing the ability to reconstruct accurate, detailed 3D scenes from blurry captures. Its robustness to incorrect pose initialization and its applicability to real-world captures set a new state of the art in image deblurring and 3D scene reconstruction. The method's success in both experimental setups and practical scenarios promises substantial developments in theoretical understanding and practical computer vision applications.
