Journal article Open Access

Bimodal Emotion Recognition using Machine Learning

Manisha S; Nafisa Saida H; Nandita Gopal; Roshni P Anand

Sponsor(s)
Blue Eyes Intelligence Engineering and Sciences Publication(BEIESP)

The emotion embedded in our communication is a predominant channel for conveying relevant, high-impact information. In recent years, researchers have tried to exploit these emotions for human-robot interaction (HRI) and human-computer interaction (HCI). Emotion recognition through speech alone or through facial expression alone is termed single-mode emotion recognition. The proposed bimodal method improves on the accuracy of these single-mode approaches by combining the speech and face modalities and recognizing emotions with a Convolutional Neural Network (CNN) model. The proposed bimodal emotion recognition system comprises three major parts: audio processing, video processing, and data fusion for detecting a person's emotion. Fusing the visual and audio information obtained from the two different channels enhances the emotion recognition rate by providing complementary data. The proposed method classifies seven basic emotions (anger, disgust, fear, happy, neutral, sad, surprise) from an input video; audio and image frames are extracted from the video to predict the person's final emotion. The dataset used is RAVDESS, an audio-visual dataset uniquely suited to the study of multi-modal emotion expression and perception; it contains audio-visual, visual-only, and audio-only recordings, of which the audio-visual portion is used for bimodal emotion detection.
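The abstract does not specify how the audio and video channels are combined, so the following is only an illustrative sketch of one common approach, decision-level fusion: each modality's CNN produces a probability distribution over the seven emotions, and the two distributions are merged by a weighted average before taking the argmax. The function name, weights, and example probabilities are hypothetical, not taken from the paper.

```python
import numpy as np

# The 7 basic emotions listed in the abstract.
EMOTIONS = ["anger", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def fuse_predictions(audio_probs, video_probs, audio_weight=0.5):
    """Decision-level fusion (illustrative): weighted average of the two
    per-modality probability vectors, then argmax over the 7 classes."""
    audio_probs = np.asarray(audio_probs, dtype=float)
    video_probs = np.asarray(video_probs, dtype=float)
    fused = audio_weight * audio_probs + (1.0 - audio_weight) * video_probs
    return EMOTIONS[int(np.argmax(fused))], fused

# Hypothetical example: the audio model is torn between "happy" and
# "neutral", but the face model strongly indicates "happy"; the fused
# distribution resolves the ambiguity in favour of "happy".
audio = [0.02, 0.02, 0.02, 0.40, 0.44, 0.05, 0.05]
video = [0.01, 0.01, 0.02, 0.80, 0.10, 0.03, 0.03]
label, fused = fuse_predictions(audio, video)
print(label)  # happy
```

This complements, rather than replaces, feature-level fusion, in which audio and visual features are concatenated before classification; the paper's fusion stage could follow either design.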

