Continuous 128-Dimensional Emotion Embeddings for Real-Time Voice AI
Description
We present a novel approach for extracting continuous 128-dimensional emotion embeddings from speech in real time for emotion-conditioned text-to-speech synthesis. Unlike prior work built on low-dimensional emotion representations, whether discrete categories or 3-dimensional arousal-valence-dominance coordinates, our learned high-dimensional representation captures nuanced emotional variations, including speaker identity, intensity levels, and prosodic patterns, within single emotion categories.
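As a concrete sketch of what such an extractor can look like, the following is a minimal, hypothetical PyTorch encoder that maps a mel spectrogram to a 128-dimensional vector. The `EmotionEncoder` name, layer sizes, and input shape are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a CNN emotion encoder: mel spectrogram in,
# 128-dimensional embedding out. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class EmotionEncoder(nn.Module):
    def __init__(self, n_mels: int = 80, embed_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size vector
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, 1, n_mels, frames)
        h = self.conv(mel).flatten(1)  # (batch, 64)
        return self.proj(h)            # (batch, 128) emotion embedding

# Usage: a clip of 100 frames over an 80-bin mel spectrogram.
emb = EmotionEncoder()(torch.randn(1, 1, 80, 100))
print(emb.shape)  # torch.Size([1, 128])
```

Global average pooling keeps the encoder independent of clip length, which matters for a real-time setting where frame counts vary.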
Our convolutional neural network architecture, trained on the Emotional Speech Dataset (ESD), achieves 99.7% classification accuracy across five emotion categories (neutral, happy, sad, angry, surprise) while producing distinguishable 128-dimensional vectors for different acoustic realizations of the same labeled emotion. Analysis reveals substantial intra-category variance (a mean per-dimension standard deviation of 0.28), demonstrating that the model captures speaker-specific characteristics and intensity gradations beyond categorical labels.
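The intra-category variance metric can in principle be computed as the per-dimension standard deviation of embeddings within each labeled emotion, averaged over the 128 dimensions and the categories. The sketch below uses synthetic data and a hypothetical function name; it illustrates the metric, not the paper's evaluation code.

```python
# Illustrative intra-category variance check: per-dimension standard
# deviation within one labeled emotion, averaged across dimensions and
# then across categories. Data and names here are placeholders.
import numpy as np

def mean_intra_category_std(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """embeddings: (n_samples, 128); labels: (n_samples,) emotion ids."""
    stds = []
    for cat in np.unique(labels):
        cat_emb = embeddings[labels == cat]      # all clips of one emotion
        stds.append(cat_emb.std(axis=0).mean())  # mean std across 128 dims
    return float(np.mean(stds))

# Synthetic example: 200 random "embeddings" over 5 categories.
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 128))
lab = rng.integers(0, 5, size=200)
print(mean_intra_category_std(emb, lab))
```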
The continuous embedding space enables smooth interpolation between emotional states and intensity scaling within categories, addressing limitations of discrete emotion representations for natural text-to-speech generation. Applications include emotionally aware voice assistants, customer service de-escalation systems, and therapeutic voice interfaces where fine-grained emotional control is critical.
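A minimal sketch of the two operations the continuous space enables, assuming plain linear interpolation and intensity scaling anchored at a neutral reference (the paper may use a different scheme); `interpolate`, `scale_intensity`, and the neutral-reference construction are illustrative assumptions.

```python
# Sketch of smooth interpolation between emotional states and intensity
# scaling within a category, assuming simple linear operations in the
# 128-dimensional embedding space. Purely illustrative.
import numpy as np

def interpolate(e_a: np.ndarray, e_b: np.ndarray, alpha: float) -> np.ndarray:
    """Blend two embeddings; alpha=0 gives e_a, alpha=1 gives e_b."""
    return (1.0 - alpha) * e_a + alpha * e_b

def scale_intensity(e: np.ndarray, e_neutral: np.ndarray, gain: float) -> np.ndarray:
    """Move away from (gain > 1) or toward (gain < 1) the neutral embedding."""
    return e_neutral + gain * (e - e_neutral)

happy, sad, neutral = (np.random.randn(128) for _ in range(3))
half_way = interpolate(happy, sad, 0.5)              # blend of two emotions
mildly_happy = scale_intensity(happy, neutral, 0.5)  # reduced intensity
```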
Files

| Name | md5 | Size |
|---|---|---|
| OneBudd_EmotionEmbedding_Paper (1).pdf | 2d43d4b5f93fa831b8651940c015a027 | 751.4 kB |