Abstract

Skeleton data, which consist only of the 2D/3D coordinates of human joints, have been widely studied for human action recognition. Existing methods take semantics as prior knowledge to group human joints and draw correlations according to their spatial locations; we call this the semantic perspective on skeleton modeling. In this paper, in contrast to previous approaches, we propose to model skeletons from a novel spatial perspective, in which the model takes spatial location as prior knowledge to group human joints and mines the discriminative patterns of local areas in a hierarchical manner. The two perspectives are orthogonal and complementary; fusing them in a unified framework gives our method a more comprehensive understanding of the skeleton data. In addition, we customize a network for each perspective. For the semantic perspective, we propose a Transformer-like network that excels at modeling joint correlations, and present three effective techniques to adapt it to skeleton data. For the spatial perspective, we transform the skeleton data into a sparse format for efficient feature extraction and present two types of sparse convolutional networks for sparse skeleton modeling. Extensive experiments are conducted on three challenging datasets for skeleton-based human action/gesture recognition, namely NTU-60, NTU-120, and SHREC, where our method achieves state-of-the-art performance.
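The spatial perspective hinges on converting a skeleton sequence into a sparse format before applying sparse convolutions. The abstract does not spell out this conversion, so the following is a minimal sketch under stated assumptions: joints are quantized onto a voxel grid, the frame index is attached as a fourth coordinate so a spatio-temporal sparse convolution can be applied, and features of joints falling into the same voxel are mean-pooled. The function name `to_sparse_skeleton` and the `voxel_size` default are illustrative, not from the paper.

```python
import numpy as np

def to_sparse_skeleton(joints, voxel_size=0.05):
    """Quantize (T, J, 3) joint coordinates into a sparse (coords, feats) pair.

    joints     : float array of shape (T, J, 3) -- T frames, J joints, xyz.
    voxel_size : edge length of one voxel (hypothetical default).

    Returns integer voxel coordinates with the frame index as a 4th axis,
    plus per-voxel features (the mean of the raw coordinates in each voxel).
    """
    T, J, _ = joints.shape
    # Attach the frame index so the sparse tensor is spatio-temporal.
    frame_idx = np.repeat(np.arange(T), J)[:, None]        # (T*J, 1)
    xyz = joints.reshape(-1, 3)                            # (T*J, 3)
    voxels = np.floor(xyz / voxel_size).astype(np.int32)   # quantize to grid
    coords = np.concatenate([frame_idx, voxels], axis=1)   # (T*J, 4)

    # Merge voxels occupied by several joints, averaging their features.
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    feats = np.zeros((len(uniq), 3), dtype=np.float64)
    counts = np.bincount(inverse, minlength=len(uniq))
    np.add.at(feats, inverse, xyz)                         # scatter-add xyz
    feats /= counts[:, None]
    return uniq, feats

# Example: a random 2-second clip at 30 fps with 25 joints (NTU layout).
coords, feats = to_sparse_skeleton(np.random.randn(60, 25, 3))
print(coords.shape, feats.shape)  # up to (1500, 4) / (1500, 3); duplicates merged
```

The resulting (coordinates, features) pair is the standard input format expected by sparse-convolution libraries such as MinkowskiEngine or spconv, which would implement the sparse convolutional networks the paper describes.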
