Published November 12, 2024 | Version v1
Conference paper · Open Access

Authorship Obfuscation in Multilingual Machine-Generated Text Detection

  • 1. Kempelen Institute of Intelligent Technologies
  • 2. MIT Lincoln Laboratory
  • 3. Pennsylvania State University

Description

The high-quality text generation capability of the latest Large Language Models (LLMs) raises concerns about their misuse (e.g., in the massive generation and spread of disinformation). Machine-generated text (MGT) detection is important for coping with such threats. However, it is susceptible to authorship obfuscation (AO) methods, such as paraphrasing, which can cause MGTs to evade detection. So far, this susceptibility has been evaluated only in monolingual settings, so the robustness of recently proposed multilingual detectors remains unknown. We fill this gap by comprehensively benchmarking the performance of 10 well-known AO methods attacking 37 MGT detection methods on MGTs in 11 languages (i.e., 10 × 37 × 11 = 4,070 combinations). We also evaluate the effect of data augmentation with obfuscated texts on adversarial robustness. The results indicate that all tested AO methods can cause evasion of automated detection in all tested languages, with homoglyph attacks being especially successful. However, some of the AO methods severely damaged the text, leaving it no longer readable or easily recognizable by humans as manipulated (e.g., changed language, strange characters).
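The homoglyph attack mentioned in the abstract can be illustrated with a minimal sketch: Latin letters are swapped for visually identical characters from other scripts (here Cyrillic), which changes the byte-level content a detector sees while leaving the text visually unchanged for humans. The mapping below is a small hand-picked illustration, not the table used in the paper.

```python
# Minimal homoglyph-attack sketch: replace Latin letters with visually
# identical Cyrillic look-alikes. Illustrative mapping only; the paper's
# actual substitution table may differ.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic а
    "c": "\u0441",  # Cyrillic с
    "e": "\u0435",  # Cyrillic е
    "o": "\u043e",  # Cyrillic о
    "p": "\u0440",  # Cyrillic р
}


def obfuscate(text: str, mapping: dict[str, str] = HOMOGLYPHS) -> str:
    """Swap characters for look-alikes, altering tokenization while the
    rendered text appears unchanged to a human reader."""
    return "".join(mapping.get(ch, ch) for ch in text)


original = "machine-generated text"
attacked = obfuscate(original)
print(original == attacked)            # False: byte content differs
print(len(original) == len(attacked))  # True: same visible length
```

Because detectors and tokenizers operate on the underlying code points, even a handful of such substitutions can push a text outside the distribution the detector was trained on, which is consistent with the abstract's finding that homoglyph attacks are especially effective at evading detection.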

Files

2024.findings-emnlp.369.pdf (623.6 kB)
md5:e33c2c1458dc871281ef4897cd786d77

Additional details

Funding

VIGILANT – Vital IntelliGence to Investigate ILlegAl DisiNformaTion 101073921
European Commission
AI-CODE – AI services for COntinuous trust in emerging Digital Environments 101135437
European Commission