Conference paper Open Access
Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality
Scarton, Carolina; Forcada, Mikel L.; Esplà-Gomis, Miquel; Specia, Lucia
Published 2019-11-02

Abstract:
Devising metrics to assess translation quality has always been at the core of machine translation (MT) research. Traditional automatic reference-based metrics, such as BLEU, have shown correlations with human judgements of adequacy and fluency and have been paramount for the advancement of MT system development. Crowd-sourcing has popularised and enabled the scalability of metrics based on human judgments, such as subjective direct assessments (DA) of adequacy, that are believed to be more reliable than reference-based automatic metrics. Finally, task-based measurements, such as post-editing time, are expected to provide a more detailed evaluation of the usefulness of translations for a specific task. Therefore, while DA averages adequacy judgements to obtain an appraisal of (perceived) quality independently of the task, and reference-based automatic metrics try to objectively estimate quality also in a task-independent way, task-based metrics are measurements obtained either during or after performing a specific task. In this paper we argue that, although expensive, task-based measurements are the most reliable when estimating MT quality in a specific task; in our case, this task is post-editing. To that end, we report experiments on a dataset with newly-collected post-editing indicators and show their usefulness when estimating post-editing effort. Our results show that task-based metrics comparing machine-translated and post-edited versions are the best at tracking post-editing effort, as expected. These metrics are followed by DA, and then by metrics comparing the machine-translated version and independent references. We suggest that MT practitioners should be aware of these differences and acknowledge their implications when deciding how to evaluate MT for post-editing purposes.

DOI: 10.5281/zenodo.3525003
Record: https://zenodo.org/record/3525003
Related identifier: doi:10.5281/zenodo.3525002
Community: https://zenodo.org/communities/iwslt2019
Language: English
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode)
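The contrast the abstract draws between metric families can be illustrated with a minimal sketch: a reference-based score (sentence BLEU against an independent reference) versus a task-based score (TER between the MT output and its own post-edited version, i.e. HTER), each correlated with post-editing time. This is not code or data from the paper; the sacrebleu and scipy calls are standard, but every segment and timing below is hypothetical illustration data.

```python
# Illustrative sketch of the metric families discussed in the abstract.
# Reference-based: sentence BLEU of the MT output against an independent reference.
# Task-based: TER of the MT output against its post-edited version (HTER),
# together with recorded post-editing time.
# Requires sacrebleu and scipy; all segments and timings are hypothetical.
import sacrebleu
from scipy.stats import pearsonr

mt_outputs = ["the cat sat in the mat",
              "he go to school yesterday",
              "she reads the book quickly",
              "they has finish the report"]
references = ["the cat sat on the mat",
              "he went to school yesterday",
              "she reads the book quickly",
              "they have finished the report"]
post_edits = ["the cat sat on the mat",
              "he went to school yesterday",
              "she reads the book quickly",
              "they have finished the report"]
pe_seconds = [14.0, 31.0, 6.0, 42.0]  # hypothetical post-editing times per segment

# Reference-based metric: BLEU against an independent reference
bleu = [sacrebleu.sentence_bleu(mt, [ref]).score
        for mt, ref in zip(mt_outputs, references)]

# Task-based metric: TER against the post-edited version (HTER)
hter = [sacrebleu.sentence_ter(mt, [pe]).score
        for mt, pe in zip(mt_outputs, post_edits)]

# Which metric tracks the observed post-editing effort more closely?
print("BLEU vs PE time:", pearsonr(bleu, pe_seconds))
print("HTER vs PE time:", pearsonr(hter, pe_seconds))
```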
| | All versions | This version |
| --- | --- | --- |
| Views | 165 | 165 |
| Downloads | 133 | 133 |
| Data volume | 57.0 MB | 57.0 MB |
| Unique views | 145 | 145 |
| Unique downloads | 117 | 117 |