Learning Representative Temporal Features for Action Recognition

(1802.06724)
Published Feb 19, 2018 in cs.CV

Abstract

In this paper, a novel video classification method is presented that aims to recognize different categories of third-person videos efficiently. Our motivation is to obtain a lightweight model that can be trained with limited training data. With this intuition, the processing of the 3-dimensional video input is split into a 1D temporal problem on top of the 2D spatial one. The 2D spatial frames are handled entirely by pre-trained networks, with no training phase. The only step that involves training is the classification of the 1D time series produced by describing the 2D signals. Concretely, optical flow images are first computed from consecutive frames and described by pre-trained CNNs. Their dimensionality is then reduced using PCA. By stacking the description vectors side by side, a multi-channel time series is created for each video; each channel represents a specific feature and follows it over time. The main focus of the proposed method is to classify the resulting time series effectively. To this end, the idea is to let the machine learn temporal features, which is done by training a multi-channel one-dimensional Convolutional Neural Network (1D-CNN). The 1D-CNN learns features along the temporal dimension only. Hence, the number of training parameters decreases significantly, which makes the method trainable even on smaller datasets. It is shown that the proposed method reaches state-of-the-art results on two public datasets, UCF11 and jHMDB, and competitive results on HMDB51.
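To make the temporal-only classification step concrete, the sketch below shows a minimal multi-channel 1D-CNN in PyTorch that convolves only along the time axis of the stacked, PCA-reduced descriptors. The abstract does not specify layer counts, kernel sizes, channel widths, the PCA output dimension, or the number of sampled frames, so all of those values here are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- the abstract does not fix these values.
NUM_FEATURES = 64    # channels after PCA reduction of the per-frame CNN descriptors
NUM_TIMESTEPS = 50   # number of optical-flow frames sampled per video
NUM_CLASSES = 11     # e.g. UCF11 has 11 action categories


class Temporal1DCNN(nn.Module):
    """Multi-channel 1D-CNN that convolves only along the temporal axis.

    Input shape: (batch, NUM_FEATURES, NUM_TIMESTEPS). Each channel is one
    PCA-reduced CNN feature tracked over time, as described in the abstract.
    Kernel sizes and channel widths below are illustrative assumptions.
    """

    def __init__(self, in_channels=NUM_FEATURES, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the temporal dimension
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)   # (batch, 128, 1)
        x = x.flatten(1)       # (batch, 128)
        return self.classifier(x)


if __name__ == "__main__":
    # Synthetic stand-in for a batch of videos: multi-channel time series built
    # by stacking PCA-reduced descriptors of consecutive optical-flow frames.
    video_series = torch.randn(4, NUM_FEATURES, NUM_TIMESTEPS)
    logits = Temporal1DCNN()(video_series)
    print(logits.shape)  # torch.Size([4, 11])
```

Because the convolutions slide over a single (temporal) dimension of a low-dimensional descriptor series rather than over raw video volumes, the trainable parameter count stays small, which is the property the abstract credits for trainability on modest datasets.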
