Khan, Anam Ahmad and Newn, Joshua and Bailey, James and Velloso, Eduardo (2022) Integrating Gaze and Speech for Enabling Implicit Interactions. In: CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. ACM, New York, pp. 1-14. ISBN 9781450391573
Full text not available from this repository.

Abstract
Gaze and speech are rich contextual sources of information that, when combined, can enable effective multimodal interactions. This paper proposes a machine learning-based pipeline that combines users’ natural gaze activity, the semantic knowledge extracted from their vocal utterances, and the synchronicity between the gaze and speech data to facilitate users’ interactions. We evaluated the proposed approach on an existing dataset in which 32 participants recorded voice notes while reading an academic paper. Using a Logistic Regression classifier, we demonstrate that the proposed multimodal approach maps voice notes to the correct text passages with an average F1-Score of 0.90. The proposed pipeline motivates the design of multimodal interfaces that combine natural gaze and speech patterns to enable robust interactions.
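The record does not include the authors' implementation. As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a scikit-learn Logistic Regression classifier to decide whether a (voice note, text passage) pair matches, using three hypothetical feature groups suggested by the abstract: gaze activity on the passage, semantic similarity between the spoken transcript and the passage, and gaze-speech synchronicity. The feature names (gaze_dwell, semantic_sim, synchronicity) and the synthetic data are assumptions for demonstration, not the paper's features or results.

    # Minimal sketch (not the authors' code): binary classifier over
    # multimodal features for (voice note, candidate passage) pairs.
    # All feature values and labels below are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_pairs = 1000

    # Hypothetical per-pair features:
    gaze_dwell = rng.random(n_pairs)     # normalized fixation time on the passage
    semantic_sim = rng.random(n_pairs)   # similarity of transcript to passage text
    synchronicity = rng.random(n_pairs)  # temporal overlap of gaze and speech

    X = np.column_stack([gaze_dwell, semantic_sim, synchronicity])
    # Synthetic labels: a pair "matches" when the combined evidence is strong.
    y = (0.4 * gaze_dwell + 0.4 * semantic_sim + 0.2 * synchronicity
         + 0.1 * rng.standard_normal(n_pairs) > 0.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    # Logistic Regression over standardized multimodal features.
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    clf.fit(X_train, y_train)
    print("F1-Score:", f1_score(y_test, clf.predict(X_test)))

In practice, the gaze, semantic, and synchronicity features would come from an eye tracker, a speech transcription plus text-similarity model, and timestamp alignment respectively; the sketch only shows how such features could feed a single Logistic Regression classifier.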