Kong, Y. and Li, L. and Zhang, K. and Ni, Q. and Han, J. (2019) Attention module-based spatial-temporal graph convolutional networks for skeleton-based action recognition. Journal of Electronic Imaging, 28 (4): 043032. ISSN 1017-9909
Available under License Creative Commons Attribution-NonCommercial.
Abstract
Skeleton-based action recognition is an important direction within human action recognition, because the skeleton carries essential cues for identifying actions. Spatial-temporal graph convolutional networks (ST-GCN) automatically learn both temporal and spatial features from skeleton data and achieve remarkable performance on skeleton-based action recognition. However, ST-GCN only learns local information within a fixed neighborhood of each joint and does not capture the correlations among all joints (i.e., global information). Global information therefore needs to be introduced into ST-GCN. We propose a model of dynamic skeletons, called attention module-based ST-GCN, which addresses this limitation by adding an attention module. The attention module captures global information, which yields stronger expressive power and better generalization. Experimental results on two large-scale datasets, Kinetics and NTU-RGB+D, demonstrate that our model achieves significant improvements over previous representative methods. © 2019 SPIE and IS&T.
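The global joint mixing that an attention module adds on top of local graph convolution can be illustrated with a minimal self-attention over skeleton joints. This is only a sketch with random (untrained) projection weights, not the paper's actual module; the function name, weight shapes, and the 25-joint NTU-RGB+D skeleton layout are assumptions for illustration.

```python
import numpy as np

def joint_attention(x, d_k=8, seed=0):
    """Toy self-attention over skeleton joints for a single frame.

    x: (num_joints, channels) feature matrix.
    Returns features of the same shape in which every joint aggregates
    information from ALL other joints (global context), unlike a graph
    convolution that only mixes a joint with its graph neighbors.
    """
    rng = np.random.default_rng(seed)
    c = x.shape[1]
    # Hypothetical learned projections; random here purely for illustration.
    w_q = rng.standard_normal((c, d_k))
    w_k = rng.standard_normal((c, d_k))
    w_v = rng.standard_normal((c, c))

    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d_k)              # (joints, joints) pairwise correlations
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax: each row sums to 1
    return attn @ v                              # globally mixed joint features

# Example: 25 joints (as in the NTU-RGB+D skeleton) with 3-channel coordinates
x = np.random.default_rng(1).standard_normal((25, 3))
out = joint_attention(x)
print(out.shape)  # (25, 3)
```

Because the attention weights form a dense joints-by-joints matrix, distant joints (e.g., a hand and a foot) can influence each other in a single layer, whereas stacked local graph convolutions would need several hops to relate them.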