3D Shape Completion on Unseen Categories: A Weakly-supervised Approach (2401.10578v2)

Published 19 Jan 2024 in cs.CV

Abstract: 3D shapes captured by scanning devices are often incomplete due to occlusion. 3D shape completion methods have been explored to tackle this limitation. However, most of these methods are only trained and tested on a subset of categories, resulting in poor generalization to unseen categories. In this paper, we introduce a novel weakly-supervised framework to reconstruct complete shapes from unseen categories. We first propose an end-to-end prior-assisted shape learning network that leverages data from the seen categories to infer a coarse shape. Specifically, we construct a prior bank consisting of representative shapes from the seen categories. Then, we design a multi-scale pattern correlation module that learns the complete shape of the input by analyzing the correlation between local patterns within the input and the priors at various scales. In addition, we propose a self-supervised shape refinement model to further refine the coarse shape. Considering the shape variability of 3D objects across categories, we construct a category-specific prior bank to facilitate shape refinement. We then devise a voxel-based partial matching loss and leverage the partial scans to drive the refinement process. Extensive experimental results show that our approach outperforms state-of-the-art methods by a large margin.
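
The abstract only names the voxel-based partial matching loss used to drive self-supervised refinement. As a rough sketch of the idea, the PyTorch snippet below shows one plausible one-sided formulation, in which only voxels observed in the partial scan constrain the prediction; the function name, tensor shapes, and log-likelihood form are illustrative assumptions, not the authors' implementation.

```python
import torch

def partial_matching_loss(pred_occ: torch.Tensor,
                          partial_occ: torch.Tensor,
                          eps: float = 1e-8) -> torch.Tensor:
    # Hypothetical one-sided loss: every voxel observed as occupied in the
    # partial scan should also be occupied in the predicted complete shape,
    # while voxels missing from the scan (occluded regions) stay unconstrained.
    # pred_occ:    (B, D, H, W) predicted occupancy probabilities in [0, 1]
    # partial_occ: (B, D, H, W) binary voxelization of the partial input scan
    mask = partial_occ > 0.5                    # observed occupied voxels
    covered = pred_occ[mask].clamp(min=eps)     # predicted probability there
    return -torch.log(covered).mean()           # reward covering observations

# Toy usage with random tensors standing in for network output and a scan.
pred = torch.sigmoid(torch.randn(2, 32, 32, 32))
partial = (torch.rand(2, 32, 32, 32) > 0.95).float()
print(partial_matching_loss(pred, partial))
```

The one-sided design is what makes partial scans usable as weak supervision: the prediction is pushed to cover everything that was observed, while remaining free to complete geometry in occluded regions.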
