Human Action Recognition from Various Data Modalities: A Review

Sun, Zehua and Liu, Jun and Ke, Qiuhong and Rahmani, Hossein and Bennamoun, Mohammed and Wang, Gang (2020) Human Action Recognition from Various Data Modalities: A Review. arXiv. ISSN 2331-8422

2012.11866v2.pdf - Published Version
Available under License Creative Commons Attribution-NonCommercial.


Human Action Recognition (HAR), which aims to understand human behaviors and assign category labels to them, has a wide range of applications and has thus been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared sequences, point clouds, event streams, audio, acceleration, radar, and WiFi, each of which encodes a different source of useful yet distinct information and offers its own advantages and application scenarios. Consequently, many existing works have investigated different types of approaches for HAR using these modalities. In this paper, we present a comprehensive survey of HAR from the perspective of the input data modalities. Specifically, we review both hand-crafted feature-based and deep learning-based methods for single data modalities, as well as methods based on multiple modalities, including fusion-based frameworks and co-learning-based approaches. The current benchmark datasets for HAR are also introduced. Finally, we discuss some potentially important research directions in this area.

Item Type: Journal Article
Deposited On: 18 Jan 2021 11:30
Last Modified: 25 Oct 2021 04:44