Conference paper Open Access
Kyriaki Pantelidou;
Despoina Chatzakou;
Theodora Tsikrika;
Stefanos Vrochidis;
Ioannis Kompatsiaris
The frequent unavailability of the large amounts of training data that deep learning models typically require to perform well on NLP tasks has given rise to the exploration of data augmentation techniques. Originally, such techniques focused mainly on rule-based methods (e.g., random insertion/deletion of words) or on synonym replacement with the help of lexicons. More recently, model-based techniques, which involve the use of non-contextual (e.g., Word2Vec, GloVe) or contextual (e.g., BERT) embeddings, have been gaining ground as a more effective means of word replacement. For BERT in particular, which has been employed successfully in various NLP tasks, data augmentation is typically performed via a masking approach, where an arbitrary set of word positions is selected and the words at those positions are replaced with others of similar meaning. Considering that the words selected for substitution are bound to affect the final outcome, this work examines different ways of selecting the words to be replaced by emphasizing different parts of a sentence, namely words with specific parts of speech or words carrying more sentiment information. Our goal is to study the effect that the selection of words to be substituted during data augmentation has on the final performance of a classification model. Evaluation experiments performed on binary classification tasks over two benchmark datasets indicate improvements in effectiveness over state-of-the-art baselines.
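As a rough illustration of the masking approach described in the abstract (not the authors' implementation, whose exact selection strategies are detailed in the paper), the sketch below masks words of chosen parts of speech and lets BERT fill them in. It assumes the Hugging Face `transformers` and `nltk` libraries; the function name `augment` and the default POS tags are illustrative choices.

```python
# Hypothetical sketch of POS-guided contextualized word substitution.
# Assumptions: transformers' fill-mask pipeline and NLTK's POS tagger;
# the selection heuristic here (adjectives/adverbs) is one possible choice.
import random

import nltk
from transformers import pipeline

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

fill_mask = pipeline("fill-mask", model="bert-base-uncased")


def augment(sentence: str, target_pos=("JJ", "RB"), n_masks: int = 1) -> str:
    """Replace up to `n_masks` words with the given POS-tag prefixes
    using BERT's top fill-mask prediction."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    # Candidate positions: words whose Penn Treebank tag starts with a target prefix.
    candidates = [i for i, (_, tag) in enumerate(tagged)
                  if tag.startswith(target_pos)]
    for i in random.sample(candidates, min(n_masks, len(candidates))):
        masked = tokens.copy()
        masked[i] = fill_mask.tokenizer.mask_token
        # Take BERT's single most likely replacement for the masked position.
        prediction = fill_mask(" ".join(masked), top_k=1)[0]
        tokens[i] = prediction["token_str"].strip()
    return " ".join(tokens)


print(augment("The movie was surprisingly good despite the slow start."))
```

Swapping the `target_pos` filter for a sentiment-lexicon lookup would give a sentiment-oriented variant of the same selection step.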
Name | Size
---|---
Selective_Word_Substitution_for_Contextualized_Data_Augmentation.pdf (md5:397ee79b3c31490b4057f8953dd9cf91) | 161.5 kB