Else-Net: Elastic Semantic Network for Continual Action Recognition from Skeleton Data

Li, Tianjiao and Ke, Qiuhong and Rahmani, Hossein and Ho, Rui En and Ding, Henghui and Liu, Jun (2022) Else-Net: Elastic Semantic Network for Continual Action Recognition from Skeleton Data. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, pp. 13414-13423. ISBN 9781665428132

Full text: Li_Else_Net_Elastic_Semantic_Network_for_Continual_Action_Recognition_From_Skeleton_ICCV_2021_paper.pdf (Published Version, 845 kB). Available under License Unspecified.

Abstract

Most state-of-the-art action recognition methods focus on offline learning, where the samples of all types of actions need to be provided at once. Here, we address continual learning of action recognition, where various types of new actions are continuously learned over time. This task is quite challenging, owing to the catastrophic forgetting problem stemming from the discrepancies between the previously learned actions and the new actions to be learned. Therefore, we propose Else-Net, a novel Elastic Semantic Network with multiple learning blocks to learn diversified human actions over time. Specifically, our Else-Net is able to automatically search and update the most relevant learning blocks w.r.t. the current new action, or explore new blocks to store new knowledge, while preserving the unmatched ones to retain the knowledge of previously learned actions and alleviate forgetting when learning new actions. Moreover, even though different human actions may vary to a large extent as a whole, their local body parts can still share many homogeneous features. Inspired by this, our proposed Else-Net mines the shared knowledge of the decomposed human body parts from different actions, which benefits continual learning of actions. Experiments show that the proposed approach enables effective continual action recognition and achieves promising performance on two large-scale action recognition datasets.
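
The abstract describes a search-or-expand mechanism over learning blocks: for each new action, either the most relevant existing block is updated or a fresh block is added, while unmatched blocks are frozen. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: the relevance score (mean block activation on a sample batch) and the threshold are illustrative stand-ins for the learned block search described in the paper, and one such layer could be instantiated per decomposed body-part stream.

import torch
import torch.nn as nn

class ElasticLayer(nn.Module):
    """Hypothetical elastic layer: a pool of learning blocks that is reused
    or grown per task, with unmatched blocks frozen to limit forgetting."""

    def __init__(self, in_dim, out_dim, threshold=0.1):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.threshold = threshold  # illustrative relevance cutoff (assumption)
        self.blocks = nn.ModuleList()
        self.active = 0
        self._add_block()

    def _add_block(self):
        # Each block here is a simple Linear+ReLU stand-in for a learning block.
        self.blocks.append(
            nn.Sequential(nn.Linear(self.in_dim, self.out_dim), nn.ReLU()))

    @torch.no_grad()
    def _best_block(self, x):
        # Proxy relevance: mean activation of each block on the new data.
        # The paper uses a learned search over blocks instead.
        scores = [blk(x).abs().mean().item() for blk in self.blocks]
        idx = max(range(len(scores)), key=scores.__getitem__)
        return idx, scores[idx]

    def prepare_for_task(self, sample_batch):
        idx, score = self._best_block(sample_batch)
        if score < self.threshold:
            # No existing block matches well enough: grow a new one.
            self._add_block()
            idx = len(self.blocks) - 1
        # Train only the chosen block; freeze the rest to preserve old knowledge.
        for i, blk in enumerate(self.blocks):
            for p in blk.parameters():
                p.requires_grad_(i == idx)
        self.active = idx
        return idx

    def forward(self, x):
        return self.blocks[self.active](x)

# Usage: e.g. pooled skeleton features for one body part.
layer = ElasticLayer(64, 128)
feats = torch.randn(32, 64)
layer.prepare_for_task(feats)
out = layer(feats)  # shape (32, 128)

Freezing the unmatched blocks is what alleviates catastrophic forgetting in this scheme, while reusing a matched block lets new actions share knowledge already learned from similar body-part motions.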

Item Type:
Contribution in Book/Report/Proceedings
ID Code:
160606
Deposited By:
Deposited On:
31 Oct 2022 15:05
Refereed?:
Yes
Published?:
Published
Last Modified:
10 Aug 2024 23:27