
Published June 3, 2023 | Version v1
Preprint | Open Access

Investigating Data Generation Using Masking in Language Models

Creators

  • Unaffiliated

Description

The current era of natural language processing (NLP) has been defined by the prominence of pre-trained language models since the advent of BERT. A defining feature of BERT and architecturally similar models is the masked language modeling (MLM) objective, in which part of the input is intentionally masked and the model is trained to predict the masked tokens; this objective has proved an effective way to pre-train Transformer-based models for NLP tasks. Data augmentation is a data-driven technique widely used in machine learning, including in computer vision and NLP, to improve model performance by artificially enlarging the training set with designated transformations. Recent studies have used masked language models to generate artificially augmented data for downstream NLP tasks, and their experimental results show that mask-based data augmentation is a simple yet effective way to improve model performance. In this paper, we explore and discuss the broader utilization of these MLM-based data augmentation methods.
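The abstract describes the technique rather than a specific implementation, but a minimal sketch of MLM-based augmentation, using the Hugging Face transformers fill-mask pipeline with bert-base-uncased (the model choice and the helper name augment are illustrative assumptions, not the paper's setup), could look like this:

    import random
    from transformers import pipeline

    # Illustrative sketch: mask one randomly chosen token per sentence and
    # let a pretrained masked language model propose replacement tokens,
    # yielding contextually plausible variants of the training example.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    def augment(sentence, num_variants=3):
        # Hypothetical helper: pick a random whitespace-delimited token to mask.
        tokens = sentence.split()
        idx = random.randrange(len(tokens))
        masked = " ".join(
            tokens[:idx] + [fill_mask.tokenizer.mask_token] + tokens[idx + 1:]
        )
        # The pipeline returns the top-k completed sequences with scores.
        predictions = fill_mask(masked, top_k=num_variants)
        return [p["sequence"] for p in predictions]

    # Example: generate three augmented variants of one sentence.
    print(augment("the movie was surprisingly good"))

Each variant keeps the sentence structure while swapping one token for a contextually plausible alternative, which is the usual label-preserving assumption behind mask-based augmentation.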

Files

ma_2023_data_aug.pdf (354.2 kB)
