AI-Driven Video-Based Emotion Classification Algorithm for Maintenance Workers (DESDEMONA PRIN 2022 PNRR)
Description
This repository contains the beta version of an AI-driven algorithm developed within the DESDEMONA project for the automatic identification of the emotional states of maintenance workers during task execution. The algorithm processes video recordings of operators performing maintenance activities and applies a multi-stage deep learning pipeline to detect faces, classify facial expressions, and extract time-dependent emotional indicators.
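The face detection stage of such a pipeline is typically organized as an ordered fallback chain across backends. The sketch below illustrates that pattern only; the backend interface, the bounding-box convention, and the stub detectors are illustrative assumptions, not the project's actual API.

```python
from typing import Callable, Optional, Sequence, Tuple

# A detector takes a frame and returns a face bounding box (x, y, w, h),
# or None when no face is found. Concrete backends (e.g. a Haar cascade,
# MTCNN, RetinaFace) are assumptions used here only for illustration.
BBox = Tuple[int, int, int, int]
Detector = Callable[[object], Optional[BBox]]

def detect_face(frame: object, backends: Sequence[Detector]) -> Optional[BBox]:
    """Run detection backends in priority order and return the first hit."""
    for detect in backends:
        box = detect(frame)
        if box is not None:
            return box
    return None

# Usage with stub backends: the primary detector fails, the fallback succeeds
primary = lambda frame: None
fallback = lambda frame: (10, 20, 64, 64)
box = detect_face("frame-bytes", [primary, fallback])  # → (10, 20, 64, 64)
```

The chain returns as soon as one backend finds a face, so cheaper detectors can be placed first and heavier ones used only as fallbacks.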
The system performs frame-by-frame analysis of video data using a multi-backend face detection strategy combined with deep learning–based emotion recognition. For each processed frame, the algorithm estimates the probability of seven basic emotions (neutral, happiness, sadness, anger, fear, disgust, and surprise). These emotion scores are then aggregated over time to generate dynamic emotional indicators describing the evolution of workers’ emotional states during task execution.
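The per-frame scoring and temporal aggregation described above can be sketched as follows. The seven emotion labels come from the description; the moving-average window and the helper name are illustrative assumptions, not the project's actual aggregation scheme.

```python
import numpy as np

# Seven basic emotions scored per frame (labels from the description above)
EMOTIONS = ["neutral", "happiness", "sadness", "anger",
            "fear", "disgust", "surprise"]

def aggregate_emotions(frame_scores: np.ndarray, window: int = 25) -> np.ndarray:
    """Aggregate per-frame emotion probabilities into a smoothed time series.

    frame_scores has shape (n_frames, 7), each row a probability
    distribution over EMOTIONS. The 25-frame window (about one second
    at 25 fps) is an illustrative choice.
    """
    kernel = np.ones(window) / window
    # Smooth each emotion channel independently with a moving average
    smoothed = np.column_stack(
        [np.convolve(frame_scores[:, k], kernel, mode="same")
         for k in range(frame_scores.shape[1])]
    )
    # Renormalize so each frame's scores still sum to 1 at the edges
    return smoothed / smoothed.sum(axis=1, keepdims=True)

# Example: 100 frames of uniform scores remain uniform after smoothing
scores = np.full((100, 7), 1 / 7)
agg = aggregate_emotions(scores)
```

Smoothing before aggregation suppresses single-frame classification noise while preserving slower emotional transitions.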
To improve robustness in realistic industrial environments, the algorithm integrates multiple face detection frameworks and implements temporal downsampling strategies to optimize computational efficiency. The extracted emotional indicators can be mapped to psychological dimensions such as valence and arousal, enabling the analysis of emotional stability, transitions, and intensity throughout the task.
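The mapping from the seven emotion probabilities to valence and arousal can be sketched as a weighted projection. The weight values below are placeholders loosely following circumplex-model conventions, not the project's calibrated coefficients.

```python
# Illustrative (valence, arousal) weights per emotion, each in [-1, 1].
# These are placeholder values for the sketch; the coefficients actually
# used in DESDEMONA are not specified here.
VA_WEIGHTS = {
    "neutral":   (0.0,  -0.2),
    "happiness": (0.9,   0.5),
    "sadness":   (-0.7, -0.4),
    "anger":     (-0.8,  0.8),
    "fear":      (-0.6,  0.7),
    "disgust":   (-0.7,  0.3),
    "surprise":  (0.1,   0.8),
}

def to_valence_arousal(probs: dict) -> tuple:
    """Project an emotion probability distribution onto valence/arousal."""
    valence = sum(p * VA_WEIGHTS[e][0] for e, p in probs.items())
    arousal = sum(p * VA_WEIGHTS[e][1] for e, p in probs.items())
    return valence, arousal

v, a = to_valence_arousal({"happiness": 1.0})
# → (0.9, 0.5) for a purely happy frame under these placeholder weights
```

Tracking the resulting valence/arousal trajectory over time is one way to quantify emotional stability, transitions, and intensity, as described above.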
The algorithm is implemented in Python and designed to support offline video analysis while remaining compatible with real-time emotion recognition systems and decision support platforms. In the DESDEMONA architecture, the extracted emotional indicators are intended to complement other human-centered metrics such as cognitive workload and operator performance, enabling more adaptive and context-aware maintenance decision support.
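The temporal downsampling mentioned above can be sketched as a simple stride-based frame selection, which keeps offline analysis cheap enough to remain compatible with real-time use. The function name and the stride scheme are illustrative assumptions; the project's actual strategy may differ.

```python
def downsample_indices(n_frames: int, source_fps: float, target_fps: float):
    """Yield the frame indices to process when downsampling from
    source_fps to target_fps (e.g. analyse 5 frames per second of a
    30 fps video instead of all 30).
    """
    # Process every `stride`-th frame; never skip below one frame per step
    stride = max(1, round(source_fps / target_fps))
    yield from range(0, n_frames, stride)

# Example: 10 frames of a 30 fps video analysed at 5 fps → stride 6
selected = list(downsample_indices(10, 30, 5))  # → [0, 6]
```

Only the selected frames are passed to the detection and classification stages; the emotion time series is then indexed by these frame positions.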
Files
Additional details
Software
- Programming language: Python