
Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption (2312.09093v3)

Published 14 Dec 2023 in cs.CV

Abstract: The standard Neural Radiance Fields (NeRF) paradigm employs a viewer-centered methodology, entangling the aspects of illumination and material reflectance into emission solely from 3D points. This simplified rendering approach presents challenges in accurately modeling images captured under adverse lighting conditions, such as low light or over-exposure. Motivated by the ancient Greek emission theory that posits visual perception as a result of rays emanating from the eyes, we slightly refine the conventional NeRF framework to train NeRF under challenging light conditions and generate normal-light condition novel views unsupervised. We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects. In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process. The Concealing Field thus compels NeRF to learn reasonable density and colour estimations for objects even in dimly lit situations. Similarly, the Concealing Field can mitigate over-exposed emissions during the rendering stage. Furthermore, we present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation. Our code and dataset are available at https://github.com/cuiziteng/Aleth-NeRF

Authors (6)
  1. Ziteng Cui (18 papers)
  2. Lin Gu (143 papers)
  3. Xiao Sun (99 papers)
  4. Xianzheng Ma (13 papers)
  5. Yu Qiao (563 papers)
  6. Tatsuya Harada (142 papers)
Citations (9)

Summary

  • The paper introduces a concealing field that models light transmission to improve 3D scene rendering under challenging lighting.
  • It employs end-to-end training on sRGB images, eliminating the need for post-processing while ensuring view consistency.
  • The method outperforms standard exposure correction techniques by producing realistic imagery in both low-light and overexposed scenarios.

Introduction

Neural Radiance Fields (NeRF) provide a powerful tool to construct high-fidelity 3D scenes from a set of 2D photographs. The technique is well-regarded for its ability to capture detailed geometry and realistically render novel views. However, a notable drawback of standard NeRF implementations is their limited effectiveness in extreme lighting conditions, such as low light or overexposure. In these challenging environments, NeRF's assumptions can fail, resulting in poor reconstruction and rendering quality.

Enhancing NeRF for Extreme Lighting

To address the challenges of adverse lighting, the authors introduce a new system called Aleth-NeRF. Its distinguishing idea is a "Concealing Field", a concept inspired by the ancient Greek emission theory of vision: objects become less visible because of a lack of illumination in the air around them, not because their intrinsic properties change. The key innovation in Aleth-NeRF is assigning transmittance values to the air itself, attributing reduced light transmission to the surrounding medium and thereby modeling how objects appear under suboptimal lighting conditions.

Methodology

During training, Aleth-NeRF jointly learns a volumetric scene representation and the concealing fields, which lets it estimate plausible object density and color even from low-light or over-exposed inputs. Manipulating the concealing fields at rendering time then recovers a normal-light appearance: for low-light scenes, the concealing fields are removed, revealing the scene as it would appear under adequate illumination; for over-exposed scenes, concealing fields are instead added during rendering to compensate for the excessive brightness.
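The train-with-concealing, render-without-concealing idea can be sketched as a small variant of standard NeRF alpha compositing, in which a per-sample concealing value multiplies the ray transmittance during training and is dropped at rendering time. This is an illustrative simplification rather than the paper's exact formulation; the function name `composite` and the scalar `theta` field are placeholders.

```python
import numpy as np

def composite(sigma, rgb, theta, deltas, use_concealing):
    """Alpha-composite one ray of N samples.

    sigma:  (N,)   per-sample densities
    rgb:    (N, 3) per-sample colors
    theta:  (N,)   concealing-field values in (0, 1]; 1.0 means fully clear air
    deltas: (N,)   distances between adjacent samples along the ray
    """
    alpha = 1.0 - np.exp(-sigma * deltas)   # opacity contributed by each sample
    trans = 1.0 - alpha                     # fraction of light passing each sample
    if use_concealing:
        trans = trans * theta               # the "air" conceals additional light
    # accumulated transmittance up to (but not including) each sample
    T = np.concatenate([[1.0], np.cumprod(trans)[:-1]])
    weights = T * alpha
    return (weights[:, None] * rgb).sum(axis=0)
```

Training with `use_concealing=True` on dark captures pushes the model to explain the darkness through `theta` rather than through dim object colors; rendering the same scene with `use_concealing=False` then "unconceals" it, yielding the normal-light view.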

Performance and Contributions

Aleth-NeRF proves robust across multiple scenarios. When faced with the complexities of low-light and over-exposed environments:

  1. Aleth-NeRF demonstrates convincing capabilities in enhancing image quality and ensuring 3D consistency across different views.
  2. Compared to existing enhancement and exposure correction techniques, and combinations thereof, Aleth-NeRF generates more seamless, realistic imagery.
  3. It achieves this without relying on post-processing, instead training end-to-end directly on sRGB images, both under- and over-exposed.

Additionally, the framework contributes:

  • A challenging-illumination multi-view dataset, which includes paired sRGB low-light, normal-light, and over-exposed images.
  • Comparisons with various image enhancement and exposure correction methods, demonstrating Aleth-NeRF's higher quality and consistency in rendering novel views.

Conclusions

In summary, Aleth-NeRF offers a valuable advance in rendering 3D scenes under challenging lighting conditions, with implications for image enhancement and numerous potential applications in computer vision and graphics. Despite its strengths, the system inherits a limitation of generic NeRF models: it requires scene-specific training. It may also struggle in scenes with non-uniform lighting or shadows, which the authors earmark for future work. Nevertheless, Aleth-NeRF represents a significant step forward for neural rendering in the presence of real-world lighting variations.
