10.5281/zenodo.3525003
https://zenodo.org/records/3525003
oai:zenodo.org:3525003
Scarton, Carolina
Carolina
Scarton
Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
Forcada, Mikel L.
Mikel L.
Forcada
Dept. Llenguatges i Sist. Inform., Universitat d'Alacant, 03690 St. Vicent del Raspeig, Spain
Esplà-Gomis, Miquel
Miquel
Esplà-Gomis
Dept. Llenguatges i Sist. Inform., Universitat d'Alacant, 03690 St. Vicent del Raspeig, Spain
Specia, Lucia
Lucia
Specia
Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK & Department of Computing, Imperial College London, London SW7 2AZ, UK
Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality
Zenodo
2019
2019-11-02
eng
10.5281/zenodo.3525002
https://zenodo.org/communities/iwslt2019
Creative Commons Attribution 4.0 International
Devising metrics to assess translation quality has always been at the core of machine translation (MT) research. Traditional automatic reference-based metrics, such as BLEU, have shown correlations with human judgements of adequacy and fluency and have been paramount for the advancement of MT system development. Crowd-sourcing has popularised and enabled the scalability of metrics based on human judgements, such as subjective direct assessments (DA) of adequacy, that are believed to be more reliable than reference-based automatic metrics. Finally, task-based measurements, such as post-editing time, are expected to provide a more detailed evaluation of the usefulness of translations for a specific task. Therefore, while DA averages adequacy judgements to obtain an appraisal of (perceived) quality independently of the task, and reference-based automatic metrics try to objectively estimate quality also in a task-independent way, task-based metrics are measurements obtained either during or after performing a specific task. In this paper we argue that, although expensive, task-based measurements are the most reliable when estimating MT quality in a specific task; in our case, this task is post-editing. To that end, we report experiments on a dataset with newly-collected post-editing indicators and show their usefulness when estimating post-editing effort. Our results show that task-based metrics comparing machine-translated and post-edited versions are the best at tracking post-editing effort, as expected. These metrics are followed by DA, and then by metrics comparing the machine-translated version and independent references. We suggest that MT practitioners should be aware of these differences and acknowledge their implications when deciding how to evaluate MT for post-editing purposes.
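To make the abstract's distinction concrete, the sketch below contrasts a reference-based score (BLEU against an independent reference) with a task-based one (TER computed against the post-edited version, i.e. HTER). This is an illustrative example only, not code or data from the paper; it assumes the sacrebleu package, and the sentence strings are hypothetical placeholders.

import sacrebleu

mt_output   = "the cat sat in the mat"          # machine translation (hypothetical)
reference   = "the cat sat on the mat"          # independent human reference
post_edited = "the cat sat on the mat quietly"  # post-editor's correction of mt_output

# Reference-based: how close is the MT output to a task-independent reference?
bleu = sacrebleu.sentence_bleu(mt_output, [reference])

# Task-based: how much editing did the post-editor actually do? TER against
# the post-edited version is the HTER measure common in post-editing studies.
hter = sacrebleu.sentence_ter(mt_output, [post_edited])

print(f"BLEU vs. reference: {bleu.score:.1f}")   # higher is better
print(f"HTER vs. post-edit: {hter.score:.1f}")   # lower means less editing effort

Note that the two scores can disagree: an MT output far from the independent reference may still need little post-editing, which is the paper's motivation for preferring task-based metrics when the downstream task is post-editing.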