
CompleteDT: Point Cloud Completion with Dense Augment Inference Transformers (2205.14999v2)

Published 30 May 2022 in cs.CV

Abstract: The point cloud completion task aims to predict the missing part of an incomplete point cloud and generate a complete point cloud with fine details. In this paper, we propose a novel point cloud completion network, CompleteDT. Specifically, features are learned from point clouds at different resolutions, which are sampled from the incomplete input, and are converted into a series of \textit{spots} based on the geometric structure. Then, the Dense Relation Augment Module (DRA), based on the transformer, is proposed to learn features within \textit{spots} and to model the correlations among them. The DRA consists of the Point Local-Attention Module (PLA) and the Point Dense Multi-Scale Attention Module (PDMA): the PLA captures local information within each \textit{spot} by adaptively weighting its neighbors, while the PDMA exploits the global relationships between \textit{spots} in a multi-scale, densely connected manner. Lastly, the complete shape is predicted from the \textit{spots} by the Multi-resolution Point Fusion Module (MPF), which gradually generates complete point clouds from the \textit{spots} and updates the \textit{spots} based on these generated point clouds. Experimental results show that, because the transformer-based DRA can learn expressive features from the incomplete input and the MPF can fully exploit these features to predict the complete shape, our method largely outperforms state-of-the-art methods.
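The front of the pipeline the abstract describes — sampling the incomplete input at a coarser resolution, grouping points into local \textit{spots}, and adaptively weighting neighbors within each spot — can be sketched in a few lines. This is a minimal NumPy illustration under assumed design choices (farthest-point sampling for the resolution, kNN grouping for spots, centroid-query softmax weights standing in for the PLA); all function names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

# Hypothetical helpers: a sketch of the sample -> spot -> local-attention
# stages described in the abstract, not the CompleteDT code.

def farthest_point_sample(points, n_samples):
    """Greedy farthest-point sampling: pick n_samples points that cover
    the cloud evenly (one common way to build a coarser resolution)."""
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=int)   # start from point 0
    min_dist = np.full(n, np.inf)
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        min_dist = np.minimum(min_dist, d)    # distance to nearest chosen point
        chosen[i] = int(np.argmax(min_dist))  # farthest remaining point
    return points[chosen]

def group_spots(points, centers, k):
    """Gather the k nearest neighbours of each sampled centre into a
    local patch ('spot'). Returns shape (len(centers), k, 3)."""
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return points[idx]

def local_attention(spot):
    """Adaptive neighbour weighting inside one spot, in the spirit of the
    PLA: softmax similarity of each neighbour to the spot centroid."""
    q = spot.mean(axis=0)                       # centroid as the query
    logits = spot @ q / np.sqrt(spot.shape[1])  # scaled dot-product scores
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ spot                             # weighted spot summary

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))            # stand-in incomplete input
centers = farthest_point_sample(cloud, 64)    # one coarse resolution
spots = group_spots(cloud, centers, 16)       # 64 spots of 16 points each
summaries = np.stack([local_attention(s) for s in spots])
print(summaries.shape)  # (64, 3)
```

In the actual network each spot would carry learned feature vectors rather than raw coordinates, and the PDMA would then relate the spot summaries to one another at multiple scales; the sketch only shows the per-spot local stage.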

Citations (3)
