
Residual 3D Scene Flow Learning with Context-Aware Feature Extraction (2109.04685v2)

Published 10 Sep 2021 in cs.CV

Abstract: Scene flow estimation is the task of predicting the point-wise or pixel-wise 3D displacement vector between two consecutive frames of point clouds or images, with important applications in fields such as service robotics and autonomous driving. Although many previous works have explored scene flow estimation from point clouds extensively, two problems have not been noticed or well solved before: 1) points of adjacent frames in repetitive patterns may be wrongly associated because their neighbourhoods share similar spatial structure; 2) scene flow between adjacent frames of point clouds with long-distance movement may be estimated inaccurately. To address the first problem, a novel context-aware set convolution layer is proposed in this paper to exploit contextual structure information in Euclidean space and learn soft aggregation weights for local point features. This design is inspired by how humans perceive contextual structure when understanding scenes with repetitive patterns. The context-aware set convolution layer is incorporated into a context-aware point feature pyramid module of 3D point clouds for scene flow estimation. For the second problem, an explicit residual flow learning structure is proposed in the residual flow refinement layer to cope with long-distance movement. Experiments and an ablation study on the FlyingThings3D and KITTI scene flow datasets demonstrate the effectiveness of each proposed component. Qualitative results show that the problems of ambiguous inter-frame association and long-distance movement estimation are handled well. Quantitative results on both FlyingThings3D and KITTI show that the proposed method achieves state-of-the-art performance, surpassing, to the best of our knowledge, all previous works by at least 25%.
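
To make the first component more concrete, below is a minimal PyTorch sketch in the spirit of the proposed context-aware set convolution: instead of max-pooling neighbour features, it predicts soft aggregation weights from the local Euclidean structure (relative neighbour positions plus encoded features) and takes a weighted sum. This is an illustration only, not the authors' implementation; the layer widths, the brute-force K-nearest-neighbour grouping, and all tensor shapes are assumptions.

```python
# Illustrative sketch only (not the paper's released code): a set-conv layer that
# aggregates K-nearest-neighbour features with learned soft weights derived from
# local Euclidean structure, instead of max-pooling them.
import torch
import torch.nn as nn


class ContextAwareSetConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, k: int = 16):
        super().__init__()
        self.k = k
        # Encodes each neighbour's feature together with its relative position.
        self.feature_mlp = nn.Sequential(
            nn.Conv2d(in_channels + 3, out_channels, 1),
            nn.ReLU(inplace=True),
        )
        # Predicts a soft aggregation weight per neighbour from spatial context.
        self.weight_mlp = nn.Sequential(
            nn.Conv2d(3 + out_channels, out_channels, 1),
            nn.Softmax(dim=-1),  # normalise over the K neighbours
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        """
        xyz:   (B, N, 3)  point coordinates
        feats: (B, N, C)  per-point features
        returns (B, N, out_channels)
        """
        B, N, _ = xyz.shape
        # K-nearest neighbours in Euclidean space (brute force for clarity).
        dists = torch.cdist(xyz, xyz)                          # (B, N, N)
        knn_idx = dists.topk(self.k, largest=False).indices    # (B, N, K)

        batch_idx = torch.arange(B, device=xyz.device).view(B, 1, 1)
        nbr_xyz = xyz[batch_idx, knn_idx]                      # (B, N, K, 3)
        nbr_feats = feats[batch_idx, knn_idx]                  # (B, N, K, C)

        rel_xyz = nbr_xyz - xyz.unsqueeze(2)                   # relative positions
        grouped = torch.cat([rel_xyz, nbr_feats], dim=-1)      # (B, N, K, C+3)

        # Conv2d with kernel size 1 acts as a shared MLP; it expects (B, C, N, K).
        encoded = self.feature_mlp(grouped.permute(0, 3, 1, 2))  # (B, C_out, N, K)

        context = torch.cat([rel_xyz.permute(0, 3, 1, 2), encoded], dim=1)
        weights = self.weight_mlp(context)                     # soft weights over K

        # Weighted sum replaces max-pooling, letting spatial context decide how
        # much each neighbour contributes.
        out = (encoded * weights).sum(dim=-1)                  # (B, C_out, N)
        return out.permute(0, 2, 1)                            # (B, N, C_out)
```

The residual flow refinement for the second problem follows the familiar residual-learning pattern: each refinement layer regresses a correction to the current flow estimate and adds it back, rather than predicting the full displacement from scratch, which makes long-distance motion easier to estimate.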

Authors (4)
  1. Guangming Wang (57 papers)
  2. Yunzhe Hu (4 papers)
  3. Xinrui Wu (10 papers)
  4. Hesheng Wang (87 papers)
Citations (26)
