
FastV2C-HandNet: Fast Voxel to Coordinate Hand Pose Estimation with 3D Convolutional Neural Networks

(arXiv:1907.06327)
Published Jul 15, 2019 in cs.CV, cs.HC, cs.LG, and eess.IV

Abstract

Hand pose estimation from monocular depth images is a long-standing and challenging problem in the computer vision community. In this paper, we present a novel approach to estimating 3D hand joint locations from 2D depth images. Unlike most previous methods, our model captures the 3D spatial structure of the depth image, giving it a richer understanding of the input. We voxelize the input depth map to capture its 3D features and apply 3D data augmentations to make the network robust to real-world images. The network is trained end-to-end, which significantly reduces time and space complexity compared to other methods. Through extensive experiments, we show that our model outperforms state-of-the-art methods in the time required to train and to predict 3D hand joint locations, making it better suited to real-world hand pose estimation scenarios.
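The abstract outlines a voxel-to-coordinate pipeline: back-project the depth map into 3D, discretize it into an occupancy grid, and regress joint coordinates with a 3D CNN trained end-to-end. The sketch below is a minimal illustration of that idea, not the authors' published FastV2C-HandNet architecture; the camera intrinsics, grid resolution, cube size, joint count, and layer sizes are all assumptions made for the example.

```python
# Illustrative sketch only -- NOT the authors' published architecture.
# Pipeline: depth map -> 3D point cloud -> occupancy voxel grid -> 3D CNN -> J x 3 joints.
import numpy as np
import torch
import torch.nn as nn

def depth_to_voxels(depth, fx, fy, cx, cy, grid=32, cube_mm=250.0):
    """Back-project a depth map (in mm) to 3D points and bin them into an
    occupancy grid of shape (grid, grid, grid) centred on the hand."""
    v, u = np.nonzero(depth)                       # pixels with valid depth
    z = depth[v, u].astype(np.float32)
    x = (u - cx) * z / fx                          # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)              # (N, 3) points in mm
    centre = pts.mean(axis=0)                      # crude hand-centre estimate
    idx = ((pts - centre) / cube_mm + 0.5) * grid  # normalise into [0, grid)
    idx = idx.astype(int)
    keep = np.all((idx >= 0) & (idx < grid), axis=1)
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[tuple(idx[keep].T)] = 1.0                  # mark occupied cells
    return vox

class Voxel2CoordNet(nn.Module):
    """Minimal 3D CNN that regresses num_joints 3D coordinates from a voxel grid."""
    def __init__(self, num_joints=21):
        super().__init__()
        self.num_joints = num_joints
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, num_joints * 3),        # (x, y, z) per joint
        )

    def forward(self, vox):                        # vox: (B, 1, 32, 32, 32)
        return self.head(self.features(vox)).view(-1, self.num_joints, 3)

if __name__ == "__main__":
    # Fake depth frame (mm) and assumed intrinsics, just to exercise the pipeline.
    depth = np.random.rand(240, 320).astype(np.float32) * 400 + 300
    vox = depth_to_voxels(depth, fx=588.0, fy=587.0, cx=160.0, cy=120.0)
    net = Voxel2CoordNet(num_joints=21)
    pred = net(torch.from_numpy(vox)[None, None])  # -> (1, 21, 3)
    print(pred.shape)
```

In such a setup the whole model can be trained end-to-end with a simple L2 loss between predicted and ground-truth joint coordinates, which is the direct coordinate-regression style the abstract contrasts with heavier multi-stage methods.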

