Zhou, F., Jiang, Z., Liu, Z., Chen, F., Chen, L., Tong, L., Yang, Z., Wang, H., Fei, M., Li, L. and Zhou, H. (2022) Structured Context Enhancement Network for Mouse Pose Estimation. IEEE Transactions on Circuits and Systems for Video Technology, 32 (5). pp. 2787-2801. ISSN 1051-8215
Structured_Context_Enhancement_Network_for_Mouse_Pose_Estimation_IEEE.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial.
Abstract
Automated analysis of mouse behaviours is crucial for many applications in neuroscience. However, quantifying mouse behaviours from videos or images remains a challenging problem, in which pose estimation plays an important role in describing mouse behaviours. Although deep learning based methods have made promising advances in human pose estimation, they cannot be directly applied to mouse pose estimation because of the physiological differences between the two species. In particular, since the mouse body is highly deformable, accurately locating the different keypoints on the mouse body is challenging. In this paper, we propose a novel Hourglass-network-based model, the Graphical Model based Structured Context Enhancement Network (GMSCENet), which incorporates two effective modules: a Structured Context Mixer (SCM) and Cascaded Multi-level Supervision (CMLS). The SCM adaptively learns and enhances the proposed structured context information of each mouse part through a novel graphical model that accounts for the motion differences between body parts. The CMLS module then jointly trains the SCM and the Hourglass network by generating multi-level supervisory information, increasing the robustness of the whole network. Using the multi-level predictions from the SCM and CMLS, we develop an inference method that ensures accurate localisation results. Finally, we evaluate our approach against several baselines on our Parkinson's Disease Mouse Behaviour (PDMB) dataset and the standard DeepLabCut Mouse Pose dataset. The experimental results show that our method achieves better or competitive performance compared with other state-of-the-art approaches.
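The abstract describes Cascaded Multi-level Supervision as jointly training the network by supervising predictions at multiple levels. As a rough illustration only (the function name, weighting scheme, and use of plain MSE here are assumptions, not the paper's actual formulation), a multi-level supervision objective can be sketched as a weighted sum of per-stage losses against a common target heatmap:

```python
import numpy as np

def multi_level_loss(stage_heatmaps, target, weights=None):
    """Hypothetical sketch of multi-level supervision: each stage's
    predicted keypoint heatmaps are compared against the same target,
    and the per-stage MSE losses are combined in a weighted sum."""
    if weights is None:
        weights = [1.0] * len(stage_heatmaps)  # equal weighting by default
    total = 0.0
    for w, pred in zip(weights, stage_heatmaps):
        total += w * float(np.mean((pred - target) ** 2))
    return total

# Toy example: two stages predicting 4 keypoint heatmaps of size 8x8.
target = np.zeros((4, 8, 8))
stages = [np.zeros((4, 8, 8)), np.ones((4, 8, 8))]
loss = multi_level_loss(stages, target)  # stage losses 0.0 and 1.0 sum to 1.0
```

Supervising every stage, rather than only the final output, is a common way to stabilise training of stacked Hourglass-style networks; the paper's actual loss and inference procedure are more involved than this sketch.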