HNSE-O1-2. How Facial Features and Head Gesture Convey Attention in Stationary Environments

Janelle Domantay1
Faculty Mentor: Brendan Morris, Ph.D.2
1Howard R. Hughes College of Engineering, Department of Computer Science
2Howard R. Hughes College of Engineering, Department of Electrical and Computer Engineering

Awareness detection technologies have been gaining traction across a variety of industries. While most often used for driver fatigue detection, recent research has shifted toward using computer vision to analyze user attention in stationary environments such as online classrooms. This study extends previous research on distraction detection by analyzing which visual features contribute most to predicting awareness and fatigue. We used the open-source facial analysis toolkit OpenFace to analyze visual data of subjects at varying levels of attentiveness. Then, using a Support Vector Machine (SVM), we created several prediction models for user attention and identified Histogram of Oriented Gradients (HOG) features as the strongest predictor among the features we tested. We also compared the performance of this SVM to deep learning approaches that use convolutional and convolutional-recurrent neural networks (CNNs and CRNNs). Interestingly, CRNNs did not perform significantly better than their CNN counterparts. While the deep learning methods performed better overall, SVMs used fewer resources and, with certain parameter choices, approached the performance of the deep learning methods.
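The HOG-plus-SVM pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual implementation: the study extracted features with OpenFace, whereas here a simplified HOG descriptor is computed directly with NumPy, and the cell/bin counts and SVM kernel settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def hog_features(img, cells=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude (no block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = img.shape
    ch, cw = h // cells, w // cells
    feats = []
    for i in range(cells):
        for j in range(cells):
            m = mag[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].ravel()
            a = ang[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-6)  # L2-normalize the descriptor


rng = np.random.default_rng(0)

# Synthetic stand-in for grayscale face crops with binary
# attentive/distracted labels (the real study used recorded subjects).
frames = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

# One HOG descriptor per frame.
X = np.array([hog_features(f) for f in frames])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

# RBF-kernel SVM; the study's actual kernel and parameters are not given here.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real data, the descriptor dimensionality (here 8 × 8 cells × 9 bins = 576) is small enough that SVM training remains cheap relative to a CNN, which is consistent with the resource trade-off noted in the abstract.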


Nov 15–19, 2021 · All Day
HNSE: Podium Session 1
The Office of Undergraduate Research


