Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition

Srivastava, Namrata and Newn, Joshua and Velloso, Eduardo (2018) Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Article 189, pp. 1-27. ISSN 2474-9567

Full text not available from this repository.

Abstract

Human activity recognition (HAR) is an important research area due to its potential for building context-aware interactive systems. Though movement-based activity recognition is an established area of research, recognising sedentary activities remains an open research question. Previous work has explored eye-based activity recognition as a potential approach to this challenge, focusing either on statistical measures derived from eye-movement properties (low-level gaze features) or on some knowledge of the Areas-of-Interest (AOIs) of the stimulus (high-level gaze features). In this paper, we extend this body of work by introducing mid-level gaze features: features that add a level of abstraction over low-level features with some knowledge of the activity, but not of the stimulus. We evaluated our approach on a dataset collected from 24 participants performing eight desktop computing activities. We trained a classifier that extends 26 low-level features derived from existing literature with 24 novel candidate mid-level gaze features. Our results show an overall classification performance of 0.72 (F1-score), with up to a 4% increase in accuracy when our mid-level gaze features are added. Finally, we discuss the implications of combining low- and mid-level gaze features, as well as future directions for eye-based activity recognition.
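For readers who want a concrete sense of the pipeline the abstract describes, the sketch below shows one way low- and mid-level gaze features could be combined for classification. It is an illustrative sketch only, not the authors' implementation: the feature definitions, thresholds, synthetic data, and the RandomForest classifier are hypothetical stand-ins for the 26 low-level and 24 mid-level features and the classifier evaluated in the paper.

```python
# Illustrative sketch only; feature choices, thresholds, and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def low_level_features(fix_durations, sacc_amplitudes):
    """Statistical measures over eye-movement events (fixations, saccades)."""
    return np.array([
        fix_durations.mean(), fix_durations.std(), fix_durations.max(),
        sacc_amplitudes.mean(), sacc_amplitudes.std(), sacc_amplitudes.max(),
    ])


def mid_level_features(fix_durations, sacc_amplitudes, window_s=60.0):
    """Abstractions over low-level events that assume some knowledge of the
    activity, but none of the stimulus (hypothetical examples)."""
    long_fix = fix_durations > 0.5        # fixations longer than 500 ms (assumed threshold)
    small_sacc = sacc_amplitudes < 2.0    # saccades smaller than 2 degrees (assumed threshold)
    return np.array([
        long_fix.mean(),                  # proportion of long fixations
        small_sacc.mean(),                # proportion of small saccades
        len(fix_durations) / window_s,    # fixation rate over the analysis window
    ])


# Synthetic data purely to make the example runnable: eight activity classes and
# 24 windows per class (mirroring the study's 8 activities and 24 participants),
# each window summarised from 100 fixation/saccade events.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(8):
    for _ in range(24):
        fix = rng.gamma(2.0, 0.15, size=100)   # fixation durations in seconds
        sacc = rng.gamma(2.0, 1.5, size=100)   # saccade amplitudes in degrees
        X.append(np.concatenate([low_level_features(fix, sacc),
                                 mid_level_features(fix, sacc)]))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("macro F1:", cross_val_score(clf, X, y, cv=5, scoring="f1_macro").mean())
```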

Item Type: Journal Article
Journal or Publication Title: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
ID Code: 166878
Deposited By:
Deposited On: 03 Mar 2022 09:15
Refereed?: Yes
Published?: Published
Last Modified: 15 Jul 2024 22:24