EdgeNet: Semantic Scene Completion from a Single RGB-D Image (1908.02893v2)
Abstract: Semantic scene completion is the task of predicting a complete 3D representation of volumetric occupancy, with corresponding semantic labels, for a scene observed from a single point of view. Previous works on semantic scene completion from RGB-D data used either depth alone or depth with colour, projecting the 2D image into the 3D volume and resulting in a sparse data representation. In this work, we present a new strategy to encode colour information in 3D space using edge detection and a flipped truncated signed distance function. We also present EdgeNet, a new end-to-end neural network architecture capable of handling features generated from the fusion of depth and edge information. Experimental results show an improvement of 6.9% over the state-of-the-art result on real data among end-to-end approaches.
- Aloisio Dourado
- Teofilo Emidio de Campos
- Hansung Kim
- Adrian Hilton
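The encoding strategy described in the abstract, detecting edges in the colour image and representing them volumetrically with a flipped truncated signed distance encoding, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes Canny as the edge detector, a pinhole camera with known intrinsics (fx, fy, cx, cy), and an unsigned, normalised variant of the flipped TSDF; all function names are hypothetical.

```python
import numpy as np
import cv2
from scipy.ndimage import distance_transform_edt


def detect_colour_edges(rgb):
    """Edge mask from the RGB image (Canny thresholds are placeholders)."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 100, 200) > 0


def backproject_edges(edge_mask, depth, fx, fy, cx, cy):
    """Lift edge pixels with valid depth into 3D camera coordinates."""
    v, u = np.nonzero(edge_mask & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) points in metres


def voxelize(points, origin, voxel_size, grid_shape):
    """Mark voxels that contain at least one 3D edge point."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid


def flipped_tsdf(occupancy, truncation=3.0):
    """Unsigned, normalised flipped-TSDF-style encoding: 1 at occupied
    voxels, decaying to 0 beyond `truncation` voxels, so the strongest
    signal lies near the (edge) surfaces rather than in empty space."""
    dist = distance_transform_edt(~occupancy)          # distance to nearest occupied voxel
    return 1.0 - np.clip(dist / truncation, 0.0, 1.0)  # flip: high near surfaces
```

A volume produced this way from the colour edges would then be combined with an analogous flipped-TSDF encoding of the depth surface to form the fused input that an EdgeNet-style 3D network consumes; the exact fusion scheme and network architecture are described in the paper itself.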