Do We Need a Classifier? Dual Objectives Go Beyond Baselines in Fine-Grained Emotion Classification
Description
Fine-grained emotion classification is the task of identifying and distinguishing among a large number of subtle emotional states with nuanced differences. Its scope extends substantially beyond coarse-grained categories such as broad valence (e.g., positive, negative) or limited basic-emotion taxonomies (e.g., Ekman [3], Plutchik [4]). The standard approach today fine-tunes a pre-trained language model (e.g., BERT) with a classifier head under a standard cross-entropy loss. In this work, we revisit the foundations of emotion modeling and propose an alternative approach to fine-grained emotion recognition from text. We reframe multi-label classification as semantic similarity estimation, training with a contrastive objective between the text and the emotion labels and dispensing with the classifier head entirely. A model trained this way surpasses several existing baselines on fine-grained emotion detection benchmarks. Building on this insight, we introduce a dual-objective framework that jointly optimizes similarity alignment and a classification objective, enabling a better understanding of emotional semantics and more effective handling of class imbalance. Our experiments show that the proposed formulation yields consistent performance improvements and achieves state-of-the-art results among existing baselines, with macro-F1 scores of 0.56 on GoEmotions and 0.61 on SemEval-2018 Task 1C. We additionally evaluate our approach on the EmoPillars dataset, demonstrating the robustness and generalizability of the proposed methods across multiple datasets.
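To make the dual-objective idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation; all function names, the temperature `tau`, and the mixing weight `alpha` are assumptions). It scores a text embedding against label embeddings by cosine similarity, then combines an InfoNCE-style contrastive term over the true labels with a multi-label binary cross-entropy classification term:

```python
import numpy as np

def cosine_sim(text_vec, label_vecs):
    # Cosine similarity of one text vector to each label vector (rows).
    t = text_vec / np.linalg.norm(text_vec)
    L = label_vecs / np.linalg.norm(label_vecs, axis=1, keepdims=True)
    return L @ t

def dual_objective_loss(text_vec, label_vecs, targets, alpha=0.5, tau=0.1):
    """Hypothetical dual objective: a contrastive similarity-alignment
    term plus a multi-label classification (BCE) term, both computed
    from text-label cosine similarities (no classifier head)."""
    sims = cosine_sim(text_vec, label_vecs)              # (num_labels,)
    logits = sims / tau                                  # temperature-scaled

    # Classification term: BCE on sigmoid of scaled similarities.
    probs = 1.0 / (1.0 + np.exp(-logits))
    bce = -np.mean(targets * np.log(probs + 1e-9)
                   + (1 - targets) * np.log(1 - probs + 1e-9))

    # Contrastive term: pull embeddings of the true labels toward
    # the text relative to all labels (InfoNCE-style).
    log_denom = np.log(np.sum(np.exp(logits)))
    pos = np.where(targets == 1)[0]
    contrastive = -np.mean(logits[pos] - log_denom) if len(pos) else 0.0

    return alpha * contrastive + (1 - alpha) * bce
```

In a trained model the loss should be lower when the text embedding aligns with its true label's embedding than when it aligns with a wrong label, which is the behavior the sketch reproduces.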
Files
| Name | Size | MD5 |
|---|---|---|
| Capstone Research.pdf | 2.5 MB | 1b994fc393221b4edf9e5e27106f1ab5 |