
Neural Radiance Field-based Visual Rendering: A Comprehensive Review (2404.00714v1)

Published 31 Mar 2024 in cs.CV

Abstract: In recent years, Neural Radiance Fields (NeRF) have made remarkable progress in computer vision and graphics, providing strong technical support for key tasks such as 3D scene understanding, novel view synthesis, human body reconstruction, and robotics, and attracting growing academic attention. As a revolutionary neural implicit field representation, NeRF has sparked a sustained research boom in the community. The purpose of this review is therefore to provide an in-depth analysis of the NeRF research literature from the past two years and to offer a comprehensive academic perspective for new researchers. This paper first elaborates the core architecture of NeRF in detail, then discusses various improvement strategies, and presents case studies of NeRF in diverse application scenarios that demonstrate its practical utility across domains. Regarding datasets and evaluation metrics, the paper details the key resources needed for training NeRF models. Finally, it offers a prospective discussion of future development trends and potential challenges for NeRF, aiming to inspire researchers in the field and to promote the further development of related technologies.
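For orientation, here is a minimal sketch of the core formulation the review elaborates, following the original NeRF paper (Mildenhall et al.); the notation below is the standard NeRF formulation, not text from this page. NeRF models a scene as an implicit function $F_\Theta(\mathbf{x}, \mathbf{d}) = (\mathbf{c}, \sigma)$ mapping a 3D position $\mathbf{x}$ and viewing direction $\mathbf{d}$ to a color $\mathbf{c}$ and volume density $\sigma$, and renders each pixel by volume integration along its camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$:

$$
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right),
$$

where $T(t)$ is the accumulated transmittance from the near bound $t_n$ to $t$. The network weights $\Theta$ are optimized by minimizing the squared error between rendered ray colors and the corresponding ground-truth pixels.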
