
SUCRe: Leveraging Scene Structure for Underwater Color Restoration (2212.09129v3)

Published 18 Dec 2022 in cs.CV

Abstract: Underwater images are altered by the physical characteristics of the medium through which light rays pass before reaching the optical sensor. Scattering and wavelength-dependent absorption significantly modify the captured colors depending on the distance of observed elements to the image plane. In this paper, we aim to recover an image of the scene as if the water had no effect on light propagation. We introduce SUCRe, a novel method that exploits the scene's 3D structure for underwater color restoration. By following points in multiple images and tracking their intensities at different distances to the sensor, we constrain the optimization of the parameters in an underwater image formation model and retrieve unattenuated pixel intensities. We conduct extensive quantitative and qualitative analyses of our approach in a variety of scenarios ranging from natural light to deep-sea environments using three underwater datasets acquired from real-world scenarios and one synthetic dataset. We also compare the performance of the proposed approach with that of a wide range of existing state-of-the-art methods. The results demonstrate a consistent benefit of exploiting multiple views across a spectrum of objective metrics. Our code is publicly available at https://github.com/clementinboittiaux/sucre.

Authors (6)
  1. Clémentin Boittiaux (5 papers)
  2. Ricard Marxer (21 papers)
  3. Claire Dune (4 papers)
  4. Aurélien Arnaubec (5 papers)
  5. Maxime Ferrera (9 papers)
  6. Vincent Hugel (5 papers)
Citations (3)

Summary

  • The paper introduces SUCRe, a novel method that leverages multi-view scene structure to tackle underwater color distortions.
  • It employs Structure-from-Motion and least squares optimization to accurately estimate parameters in an adapted image formation model.
  • Empirical results show that SUCRe outperforms existing methods, achieving higher PSNR and SSIM on both synthetic and real-world datasets.

An Expert Overview of SUCRe: Leveraging Scene Structure for Underwater Color Restoration

The paper proposes SUCRe, a method for underwater image restoration that leverages multi-view scene structure to undo the distortions water imposes on light propagation. SUCRe aims to restore each image as if the dominant underwater effects, scattering and wavelength-dependent absorption, were absent, recovering the colors and details the scene would exhibit outside the water.

Core Contributions

SUCRe's key idea is to exploit multiple observations of the same scene points to turn the optimization of an underwater image formation model (UIFM) into a well-posed problem. This multi-view formulation contrasts with traditional single-image approaches, which face inherent ambiguities because a single observation cannot disentangle scene radiance from the medium's effects. The authors use Structure-from-Motion (SfM) to recover camera poses and scene geometry, improving the estimation of the physical parameters of underwater imaging and yielding restored images that more faithfully represent the scene's original color and texture.
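To illustrate how such multi-view observations can be gathered, the sketch below projects a single SfM point into several posed pinhole cameras and records the observed intensity together with the camera-to-point distance. This is a simplified stand-in, not the paper's implementation; the function and variable names are illustrative.

```python
import numpy as np

def observations_for_point(X, poses, K, images):
    """Collect (intensity, distance) pairs for one SfM point across images.

    X      -- 3D point in world coordinates (from the SfM reconstruction)
    poses  -- list of (R, t) world-to-camera rigid transforms
    K      -- 3x3 camera intrinsics (assumed shared across images here)
    images -- matching list of H x W grayscale images
    """
    obs = []
    for (R, t), img in zip(poses, images):
        Xc = R @ np.asarray(X, float) + t   # point in the camera frame
        if Xc[2] <= 0:                      # point behind the camera
            continue
        u, v, _ = (K @ Xc) / Xc[2]          # pinhole projection
        ui, vi = int(round(u)), int(round(v))
        h, w = img.shape
        if 0 <= vi < h and 0 <= ui < w:     # point visible in this image
            obs.append((float(img[vi, ui]), float(np.linalg.norm(Xc))))
    return obs
```

In practice the matching is done with learned features and SfM tools (SuperPoint, SuperGlue, COLMAP in the paper's pipeline), but the result is the same kind of per-point intensity-versus-distance track.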

Methodological Insights

Central to SUCRe is a multi-view framework that tracks pixel intensities of the same scene points across images using SfM. Because each point is observed at several camera-to-point distances, the parameters of an adapted image formation model can be estimated directly from how intensities vary with distance. This substantially improves the estimation of the backscatter and attenuation coefficients, avoiding the strong priors that single-image models must assume.
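Models in this line of work express an observed intensity as an exponentially attenuated direct signal plus distance-dependent backscatter. A minimal per-channel sketch of such a formation model (symbol names are illustrative, not the paper's exact notation):

```python
import numpy as np

def observed_intensity(J, z, beta_d, beta_b, B_inf):
    """Underwater image formation model for one color channel.

    J      -- unattenuated scene radiance at the pixel
    z      -- camera-to-point distance (m)
    beta_d -- attenuation coefficient of the direct signal
    beta_b -- attenuation coefficient governing backscatter buildup
    B_inf  -- veiling light, i.e. backscatter at infinite distance
    """
    direct = J * np.exp(-beta_d * z)                 # signal decays with distance
    backscatter = B_inf * (1.0 - np.exp(-beta_b * z))  # haze builds up with distance
    return direct + backscatter
```

At z = 0 the observation equals the unattenuated radiance J; as z grows, it converges to the veiling light B_inf, which is why distant regions of underwater images wash out to a uniform color.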

To perform the restoration, SUCRe fits the UIFM parameters by least squares across the multi-view observations, iteratively refined with gradient-based optimization over the full set of paired intensity-distance observations. This allows the recovery of color information that is heavily attenuated in low-contrast areas, which is typically challenging for methods relying on a single view.
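A toy version of this fit is sketched below: Adam-style gradient updates minimize the squared residual of the formation model over one point's multi-view observations. This is a simplified stand-in for the paper's optimization (which operates jointly over many points with PyTorch's Adam); the initialization and hyperparameters are illustrative.

```python
import numpy as np

def fit_uifm(I, z, steps=5000, lr=0.005):
    """Fit (J, beta_d, beta_b, B_inf) for one scene point by least squares.

    I -- observed intensities of the same point across several images
    z -- corresponding camera-to-point distances
    """
    I, z = np.asarray(I, float), np.asarray(z, float)
    p = np.array([I.max(), 0.1, 0.1, I.min()])  # illustrative initialization
    m, v = np.zeros(4), np.zeros(4)             # Adam moment estimates
    for t in range(1, steps + 1):
        J, bd, bb, Bf = p
        pred = J * np.exp(-bd * z) + Bf * (1.0 - np.exp(-bb * z))
        r = pred - I
        # Gradient of 0.5 * sum(r**2) w.r.t. (J, beta_d, beta_b, B_inf).
        g = np.array([
            np.sum(r * np.exp(-bd * z)),
            np.sum(r * -z * J * np.exp(-bd * z)),
            np.sum(r * Bf * z * np.exp(-bb * z)),
            np.sum(r * (1.0 - np.exp(-bb * z))),
        ])
        m = 0.9 * m + 0.1 * g
        v = 0.999 * v + 0.001 * g * g
        p -= lr * (m / (1 - 0.9**t)) / (np.sqrt(v / (1 - 0.999**t)) + 1e-8)
    return p  # p[0] is the estimated unattenuated intensity J
```

Because each point is seen at several distances, the four parameters are constrained by the shape of the intensity-versus-distance curve rather than by priors on scene content.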

Performance and Benchmarking

SUCRe is validated through extensive testing on a synthetic dataset and real-world underwater survey data. The results indicate that SUCRe outperforms leading methods in restoration quality as quantified by PSNR and SSIM on reference datasets such as VAROS and Sea-thru D5. Moreover, SUCRe's error metrics vary less with sensor distance than those of competing methods, underscoring the robustness conferred by its multi-view formulation.
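For reference, PSNR, one of the reported metrics, reduces to a log-scaled mean squared error against the ground-truth image; SSIM is more involved and is typically taken from a library such as scikit-image.

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a restored image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(restored, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

Higher is better: halving the RMS error raises PSNR by about 6 dB, so even modest gains on these benchmarks reflect visibly cleaner restorations.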

Implications and Future Directions

The implications of SUCRe extend beyond just enhanced visual clarity for biologically and ecologically significant subsea photography. This research could facilitate new training sets for deep learning models, potentially improving single-image approaches by leveraging SUCRe-processed images as more reliable reference standards.

The SUCRe methodology opens several avenues for future research. While the paper acknowledges limitations such as the assumption of spatially and temporally constant water properties, further work could develop models that handle varying underwater conditions. Moreover, combining the approach with deep learning could accelerate restoration by learning to predict model parameters and refine outputs.

In conclusion, SUCRe delivers a significant advance in underwater image restoration by combining multi-view geometry with a physical image formation model. It holds promise both for practical underwater exploration and for deepening the understanding of light attenuation in water.
