Relational Network for Skeleton-Based Action Recognition

(1805.02556)
Published May 7, 2018 in cs.CV

Abstract

With the rapid development of effective, low-cost human skeleton capture systems, skeleton-based action recognition has attracted much attention recently. Most existing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to extract the spatio-temporal information embedded in skeleton sequences for action recognition. However, these approaches are limited in their ability to model relations within a single skeleton, because important structural information is lost when the raw skeleton data is converted to the input format required by a CNN or RNN. In this paper, we propose an Attentional Recurrent Relational Network-LSTM (ARRN-LSTM) to simultaneously model spatial configurations and temporal dynamics in skeletons for action recognition. We introduce the Recurrent Relational Network to learn spatial features within a single skeleton, followed by a multi-layer LSTM to learn temporal features across the skeleton sequence. Between the two modules, we design an adaptive attention module that focuses on the most discriminative parts of the skeleton. To exploit the complementarity of different geometric features in the skeleton for richer relational modeling, we design a two-stream architecture that learns structural features among joints and lines simultaneously. Extensive experiments on several popular skeleton datasets show that the proposed approach achieves better results than most mainstream methods.
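To make the pipeline described above concrete, the following is a minimal PyTorch sketch of one stream: a relational module that aggregates pairwise joint relations per frame, an attention layer over joints, and a multi-layer LSTM over time. All module names, dimensions, and the pairwise-relation formulation are illustrative assumptions, not the authors' implementation; in particular, a single relational pass is shown for brevity, whereas the paper's recurrent variant iterates this step, and the second (lines/bones) stream would mirror this one on bone vectors.

```python
# Illustrative sketch of an ARRN-LSTM-style model (not the authors' code).
import torch
import torch.nn as nn

class RelationalModule(nn.Module):
    """Aggregates pairwise relations between joints of a single skeleton."""
    def __init__(self, joint_dim=3, hidden=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * joint_dim, hidden), nn.ReLU())

    def forward(self, joints):                       # joints: (B, J, D)
        B, J, D = joints.shape
        a = joints.unsqueeze(2).expand(B, J, J, D)   # each joint i ...
        b = joints.unsqueeze(1).expand(B, J, J, D)   # ... paired with joint j
        pairs = torch.cat([a, b], dim=-1)            # (B, J, J, 2D)
        return self.g(pairs).mean(dim=2)             # per-joint relation feature (B, J, H)

class ARRNLSTM(nn.Module):
    def __init__(self, joint_dim=3, hidden=64, num_classes=60):
        super().__init__()
        self.relational = RelationalModule(joint_dim, hidden)
        self.attn = nn.Linear(hidden, 1)             # adaptive attention over joints
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, seq):                          # seq: (B, T, J, D)
        B, T, J, D = seq.shape
        rel = self.relational(seq.reshape(B * T, J, D))    # (B*T, J, H)
        w = torch.softmax(self.attn(rel), dim=1)           # weights over joints
        frame = (w * rel).sum(dim=1).reshape(B, T, -1)     # attended frame feature
        out, _ = self.lstm(frame)                          # temporal modeling
        return self.fc(out[:, -1])                         # classify from last step

# Usage: 2 clips, 20 frames, 25 joints with 3D coordinates.
logits = ARRNLSTM()(torch.randn(2, 20, 25, 3))       # -> (2, 60)
```

In a two-stream setup along the lines the abstract describes, a second copy of this network would consume line (bone) features, with the two streams' scores fused for the final prediction.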
