Conference paper Open Access

Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality

Scarton, Carolina; Forcada, Mikel L.; Esplà-Gomis, Miquel; Specia, Lucia


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.3525003</identifier>
  <creators>
    <creator>
      <creatorName>Scarton, Carolina</creatorName>
      <givenName>Carolina</givenName>
      <familyName>Scarton</familyName>
      <affiliation>Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK</affiliation>
    </creator>
    <creator>
      <creatorName>Forcada, Mikel L.</creatorName>
      <givenName>Mikel L.</givenName>
      <familyName>Forcada</familyName>
      <affiliation>Dept. Llenguatges i Sist. Inform., Universitat d'Alacant, 03690 St. Vicent del Raspeig, Spain</affiliation>
    </creator>
    <creator>
      <creatorName>Esplà-Gomis, Miquel</creatorName>
      <givenName>Miquel</givenName>
      <familyName>Esplà-Gomis</familyName>
      <affiliation>Dept. Llenguatges i Sist. Inform., Universitat d'Alacant, 03690 St. Vicent del Raspeig, Spain</affiliation>
    </creator>
    <creator>
      <creatorName>Specia, Lucia</creatorName>
      <givenName>Lucia</givenName>
      <familyName>Specia</familyName>
      <affiliation>Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK &amp; Department of Computing, Imperial College London, London SW7 2AZ, UK</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2019</publicationYear>
  <dates>
    <date dateType="Issued">2019-11-02</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3525003</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3525002</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/iwslt2019</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Devising metrics to assess translation quality has always been at the core of machine translation (MT) research. Traditional automatic reference-based metrics, such as BLEU, have shown correlations with human judgements of adequacy and fluency and have been paramount for the advancement of MT system development. Crowd-sourcing has popularised and enabled the scalability of metrics based on human judgements, such as subjective direct assessments (DA) of adequacy, that are believed to be more reliable than reference-based automatic metrics. Finally, task-based measurements, such as post-editing time, are expected to provide a more detailed evaluation of the usefulness of translations for a specific task. Therefore, while DA averages adequacy judgements to obtain an appraisal of (perceived) quality independently of the task, and reference-based automatic metrics try to objectively estimate quality also in a task-independent way, task-based metrics are measurements obtained either during or after performing a specific task. In this paper we argue that, although expensive, task-based measurements are the most reliable when estimating MT quality in a specific task; in our case, this task is post-editing. To that end, we report experiments on a dataset with newly-collected post-editing indicators and show their usefulness when estimating post-editing effort. Our results show that task-based metrics comparing machine-translated and post-edited versions are the best at tracking post-editing effort, as expected. These metrics are followed by DA, and then by metrics comparing the machine-translated version and independent references. We suggest that MT practitioners should be aware of these differences and acknowledge their implications when deciding how to evaluate MT for post-editing purposes.&lt;/p&gt;</description>
  </descriptions>
</resource>
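
As an illustration of how this metadata can be consumed programmatically, the following is a minimal sketch that parses the DataCite XML export above with Python's standard library and extracts the DOI, creators, title and publication year. The local filename datacite.xml is an assumption made for the example; the same record could instead be retrieved from the Zenodo URL listed under alternateIdentifiers.

import xml.etree.ElementTree as ET

# DataCite kernel-4 default namespace used by the record above
NS = {"dc": "http://datacite.org/schema/kernel-4"}

# Assumption: the XML export has been saved locally as "datacite.xml"
tree = ET.parse("datacite.xml")
root = tree.getroot()

# DOI of the record
print("DOI:", root.find("dc:identifier", NS).text)

# Creators and their affiliations
for creator in root.findall("dc:creators/dc:creator", NS):
    name = creator.find("dc:creatorName", NS).text
    affiliation = creator.find("dc:affiliation", NS).text
    print(f"{name} -- {affiliation}")

# Title and publication year
title = root.find("dc:titles/dc:title", NS).text
year = root.find("dc:publicationYear", NS).text
print(f"{title} ({year})")

The same namespace map can be reused to read the remaining fields (relatedIdentifiers, rightsList, descriptions) if more of the record is needed.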
                   All versions   This version
Views                       165            165
Downloads                   133            133
Data volume             57.0 MB        57.0 MB
Unique views                145            145
Unique downloads            117            117
