Published August 11, 2024 | Version v1
Conference paper · Open Access

Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation

  • 1. Kempelen Institute of Intelligent Technologies
  • 2. Brno University of Technology
  • 3. University of Pittsburgh

Description

The latest generative large language models (LLMs) have found application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering methods, or model settings affect the quality of paraphrased data (and of the downstream models). In this study, we investigate three text diversity incentive methods well established in crowdsourcing: taboo words, hints by previous outlier solutions, and chaining on previous outlier solutions. Using these incentive methods as part of instructions to LLMs augmenting text datasets, we measure their effects on the lexical diversity of the generated texts and on downstream model performance. We compare the effects across 5 different LLMs, 6 datasets, and 2 downstream models. We show that taboo words increase diversity the most, while hints yield the highest downstream model performance.
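At their core, the three incentive methods are prompt-construction strategies wrapped around a paraphrasing instruction. The sketch below illustrates, in minimal Python, how such prompts might be assembled and how lexical diversity could be quantified; the prompt wording, function names, and the distinct-n metric choice are illustrative assumptions, not the exact templates or measures used in the paper (those are given in the PDF).

```python
# Illustrative sketch of the three diversity incentives as prompt templates.
# Prompt wording, function names, and the distinct-n metric are assumptions
# for illustration; the paper's exact prompts and metrics are in the PDF.
from typing import List


def taboo_prompt(seed_text: str, taboo_words: List[str]) -> str:
    """Taboo words: forbid words that dominated earlier paraphrases."""
    banned = ", ".join(taboo_words)
    return (
        f"Paraphrase the text below. Do not use any of these words: {banned}.\n"
        f"Text: {seed_text}"
    )


def hints_prompt(seed_text: str, outliers: List[str]) -> str:
    """Hints: show previous outlier (most diverse) paraphrases as inspiration."""
    hints = "\n".join(f"- {o}" for o in outliers)
    return (
        f"Here are some unusual paraphrases of similar texts:\n{hints}\n"
        f"Write a new, different paraphrase of: {seed_text}"
    )


def chaining_prompt(previous_outlier: str) -> str:
    """Chaining: paraphrase the previous outlier solution itself."""
    return f"Paraphrase the text below.\nText: {previous_outlier}"


def distinct_n(texts: List[str], n: int = 2) -> float:
    """Lexical diversity as the ratio of unique to total word n-grams."""
    ngrams = [
        tuple(words[i : i + n])
        for t in texts
        for words in [t.lower().split()]
        for i in range(len(words) - n + 1)
    ]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0
```

In a pipeline of this shape, each prompt would be sent to the LLM once per seed sample, with the taboo-word and outlier lists updated between augmentation rounds based on the diversity of the collected paraphrases.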

Files

2024.acl-long.710.pdf (1.8 MB)
md5:31c2c95df66b2b44a8c323d5bb65ae72

Additional details

Funding

AI-CODE – AI services for COntinuous trust in emerging Digital Environments (grant 101135437)
European Commission
VIGILANT – Vital IntelliGence to Investigate ILlegAl DisiNformaTion (grant 101073921)
European Commission
vera.ai – VERification Assisted by Artificial Intelligence (grant 101070093)
European Commission
Modermed (grant APVV-22-0414)
Slovak Research and Development Agency