Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks

Liu, J., Wang, G., Duan, L.-Y., Abdiyeva, K. and Kot, A. C. (2018) Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks. IEEE Transactions on Image Processing, 27 (4). pp. 1586-1599. ISSN 1057-7149

Full text not available from this repository.

Abstract

Human action recognition in 3D skeleton sequences has attracted a lot of research attention. Recently, long short-term memory (LSTM) networks have shown promising performance in this task due to their strengths in modeling the dependencies and dynamics of sequential data. Since not all skeletal joints are informative for action recognition, and irrelevant joints often introduce noise that degrades performance, more attention should be paid to the informative ones. However, the original LSTM network has no explicit attention ability. In this paper, we propose a new class of LSTM network, the global context-aware attention LSTM (GCA-LSTM), for skeleton-based action recognition, which selectively focuses on the informative joints in each frame by using a global context memory cell. To further improve the attention capability, we also introduce a recurrent attention mechanism, with which the attention performance of our network is enhanced progressively. In addition, we introduce a two-stream framework that leverages both coarse-grained and fine-grained attention. The proposed method achieves state-of-the-art performance on five challenging datasets for skeleton-based action recognition.
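Since the full text is not available from this repository, the following is only a rough illustrative sketch, written in PyTorch, of the attention idea summarized in the abstract: per-frame attention weights over skeletal joints are computed with the help of a global context vector, which is refined over a few attention iterations (a stand-in for the recurrent attention mechanism). All class names, dimensions, and aggregation choices (mean pooling for the initial context, last LSTM output for classification) are assumptions made for illustration, not the authors' GCA-LSTM implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAttentionLSTM(nn.Module):
    """Illustrative sketch only, not the paper's GCA-LSTM: per-frame attention
    over skeletal joints guided by a global context vector, feeding an LSTM
    for sequence classification."""

    def __init__(self, joint_dim=3, hidden_dim=128, num_classes=60):
        super().__init__()
        self.joint_embed = nn.Linear(joint_dim, hidden_dim)    # per-joint feature
        self.context_proj = nn.Linear(hidden_dim, hidden_dim)  # global context -> query
        self.score = nn.Linear(hidden_dim, 1)                  # attention score per joint
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, num_refinements=2):
        # x: (batch, frames, joints, joint_dim) raw 3D joint coordinates
        feats = torch.tanh(self.joint_embed(x))           # (b, t, j, h)

        # Initial global context: average over all frames and joints.
        context = feats.mean(dim=(1, 2))                  # (b, h)

        out = None
        for _ in range(num_refinements):                  # recurrent attention refinement
            query = self.context_proj(context).unsqueeze(1).unsqueeze(2)   # (b, 1, 1, h)
            scores = self.score(torch.tanh(feats + query)).squeeze(-1)     # (b, t, j)
            weights = F.softmax(scores, dim=-1)                            # attention over joints
            frame_feats = (weights.unsqueeze(-1) * feats).sum(dim=2)       # (b, t, h)
            out, _ = self.lstm(frame_feats)                                # (b, t, h)
            context = out.mean(dim=1)                      # updated global context

        return self.classifier(out[:, -1])                 # class logits from last step

if __name__ == "__main__":
    model = JointAttentionLSTM()
    clip = torch.randn(4, 30, 25, 3)   # 4 clips, 30 frames, 25 joints, xyz
    print(model(clip).shape)           # torch.Size([4, 60])

The key design point mirrored from the abstract is that the attention weights depend on a global summary of the whole sequence rather than on the current frame alone, and that this summary is itself updated after each attention pass; the two-stream (coarse/fine) part of the paper is not sketched here.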

Item Type:
Journal Article
Journal or Publication Title:
IEEE Transactions on Image Processing
Uncontrolled Keywords:
Computer Graphics and Computer-Aided Design
Subjects:
Computer Graphics and Computer-Aided Design; Software
ID Code:
223139
Deposited By:
Deposited On:
16 Aug 2024 14:10
Refereed?:
Yes
Published?:
Published
Last Modified:
16 Aug 2024 14:10