
Learning Coupled Spatial-temporal Attention for Skeleton-based Action Recognition (1909.10214v1)

Published 23 Sep 2019 in cs.CV

Abstract: In this paper, we propose a coupled spatial-temporal attention (CSTA) model for skeleton-based action recognition, which aims to identify the most discriminative joints and frames in the spatial and temporal domains simultaneously. Conventional approaches usually treat all joints or frames in a skeletal sequence as equally important, which makes them not robust to ambiguous and redundant information. To address this, we first learn two sets of weights for the joints and frames through two subnetworks respectively, which enables the model to pay attention to the relatively informative parts of a sequence. Then, we compute the cross product of the joint and frame weights to obtain the coupled spatial-temporal attention. Moreover, our CSTA mechanism can be easily plugged into existing hierarchical CNN models (CSTA-CNN). Extensive experimental results on the recently collected UESTC dataset and the currently largest NTU dataset demonstrate the effectiveness of the proposed method for skeleton-based action recognition.
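
To make the mechanism concrete, the following is a minimal PyTorch sketch of coupled spatial-temporal attention as described in the abstract: two small subnetworks produce per-joint and per-frame weights, and their outer ("cross") product forms a joint-frame attention map applied to the features. The subnetwork architecture, layer sizes, and class name here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CoupledSTAttention(nn.Module):
    """Sketch of a coupled spatial-temporal attention block.

    Assumes skeleton features of shape (N, C, T, V):
    N = batch, C = channels, T = frames, V = joints.
    The two subnetworks below are illustrative, not the paper's exact design.
    """

    def __init__(self, channels):
        super().__init__()
        # Spatial subnetwork: one weight per joint (features pooled over frames).
        self.joint_net = nn.Sequential(
            nn.Conv1d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // 4, 1, kernel_size=1),
        )
        # Temporal subnetwork: one weight per frame (features pooled over joints).
        self.frame_net = nn.Sequential(
            nn.Conv1d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // 4, 1, kernel_size=1),
        )

    def forward(self, x):
        # x: (N, C, T, V)
        joint_feat = x.mean(dim=2)   # (N, C, V), pooled over frames
        frame_feat = x.mean(dim=3)   # (N, C, T), pooled over joints

        joint_w = torch.sigmoid(self.joint_net(joint_feat))  # (N, 1, V)
        frame_w = torch.sigmoid(self.frame_net(frame_feat))  # (N, 1, T)

        # Coupled attention: outer ("cross") product of frame and joint weights.
        attn = frame_w.unsqueeze(-1) * joint_w.unsqueeze(-2)  # (N, 1, T, V)

        # Re-weight features; the residual keeps the block easy to plug into
        # an existing hierarchical CNN, as the abstract suggests for CSTA-CNN.
        return x * attn + x

# Example usage (shapes are assumptions, e.g. 25 joints as in NTU):
x = torch.randn(8, 64, 32, 25)      # (batch, channels, frames, joints)
out = CoupledSTAttention(64)(x)     # same shape as x
```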

Citations (2)


Authors (1)