Learning Computational Models of Video Memorability from fMRI Brain Imaging

Han, Junwei and Chen, Changyuan and Shao, Ling and Hu, Xintao and Han, Jungong and Liu, Tianming (2015) Learning Computational Models of Video Memorability from fMRI Brain Imaging. IEEE Transactions on Cybernetics, 45 (8). pp. 1692-1703. ISSN 2168-2267

Full text not available from this repository.

Abstract

Generally, different visual media are unequally memorable to the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework that integrates the power of low-level audiovisual features with brain activity decoding via fMRI. Initially, a user study is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined on this database. Then, human subjects' brain fMRI data are acquired while they watch the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, because fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

Item Type:
Journal Article
Journal or Publication Title:
IEEE Transactions on Cybernetics
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/1700/1709
ID Code:
88014
Deposited On:
06 Oct 2017 19:38
Refereed?:
Yes
Published?:
Published
Last Modified:
05 Aug 2020 05:30