Conference paper Open Access

Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality

Scarton, Carolina; Forcada, Mikel L.; Esplà-Gomis, Miquel; Specia, Lucia


Citation Style Language JSON Export

{
  "publisher": "Zenodo", 
  "DOI": "10.5281/zenodo.3525003", 
  "language": "eng", 
  "title": "Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality", 
  "issued": {
    "date-parts": [
      [
        2019, 
        11, 
        2
      ]
    ]
  }, 
  "abstract": "<p>Devising metrics to assess translation quality has always been at the core of machine translation (MT) research. Traditional automatic reference-based metrics, such as BLEU, have shown correlations with human judgements of adequacy and fluency and have been paramount for the advancement of MT system development. Crowd-sourcing has popularised and enabled the scalability of metrics based on human judgments, such as subjective&nbsp;direct assessments&nbsp;(DA) of adequacy, that are believed to be more reliable than reference-based automatic metrics. Finally, task-based measurements, such as post-editing time, are expected to provide a more de- tailed evaluation of the usefulness of translations for a specific task. Therefore, while DA averages adequacy&nbsp;judgements&nbsp;to obtain an appraisal of (perceived) quality independently of the task, and reference-based automatic metrics try to objectively estimate quality also in a task-independent way, task-based metrics are&nbsp;measurements&nbsp;obtained either during or after performing a specific task. In this paper we argue that, although expensive, task-based measurements are the most reliable when estimating MT quality in a specific task; in our case, this task is post-editing. To that end, we report experiments on a dataset with newly-collected post-editing indicators and show their usefulness when estimating post-editing effort. Our results show that task-based metrics comparing machine-translated and post-edited versions are the best at tracking post-editing effort, as expected. These metrics are followed by DA, and then by metrics comparing the machine-translated version and independent references. We suggest that MT practitioners should be aware of these differences and acknowledge their implications when decid- ing how to evaluate MT for post-editing purposes.</p>", 
  "author": [
    {
      "family": "Scarton, Scarton"
    }, 
    {
      "family": "Forcada, Mikel L."
    }, 
    {
      "family": "Espl\u00e0-Gomis, Miquel"
    }, 
    {
      "family": "Specia, Lucia"
    }
  ], 
  "type": "paper-conference", 
  "id": "3525003"
}
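
As an illustration of the contrast drawn in the abstract, the sketch below compares a reference-based score (MT output against an independent reference) with a task-based, HTER-like score (MT output against its own post-edited version) and checks which correlates better with post-editing time. This is a minimal sketch, not code or data from the paper: it assumes the sacrebleu and scipy packages are available, and the segments and timings are invented for illustration.

# Minimal sketch (illustrative only, not from the paper's dataset).
from sacrebleu.metrics import TER
from scipy.stats import spearmanr

mt_outputs  = ["the cat sit on the mat",
               "he go to school yesterday",
               "she have two brother",
               "we is happy with result"]
references  = ["the cat sat on the mat",
               "he went to school yesterday",
               "she has two brothers",
               "we are happy with the result"]
post_edited = ["the cat sits on the mat",
               "he went to school yesterday",
               "she has two brothers",
               "we are pleased with the outcome"]
pe_time_sec = [12.4, 6.1, 9.7, 15.3]  # hypothetical post-editing times per segment

ter = TER()
# Reference-based: MT output scored against an independent reference.
ref_based = [ter.sentence_score(mt, [r]).score for mt, r in zip(mt_outputs, references)]
# Task-based (HTER-like): MT output scored against its own post-edited version.
task_based = [ter.sentence_score(mt, [p]).score for mt, p in zip(mt_outputs, post_edited)]

print("reference-based TER vs. PE time:", spearmanr(ref_based, pe_time_sec))
print("task-based HTER vs. PE time:", spearmanr(task_based, pe_time_sec))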
Statistics (All versions / This version):
Views: 165 / 165
Downloads: 133 / 133
Data volume: 57.0 MB / 57.0 MB
Unique views: 145 / 145
Unique downloads: 117 / 117
