Semantics-based English-Arabic machine translation evaluation
- 1. Department of Computer Science, Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Salt, Jordan
- 2. Principal Architect – AI, Department of R Digital, R Systems International, Noida, India
- 3. Department of Information Technology, King Abdullah II School of Information Technology, University of Jordan, Amman, Jordan
- 4. Department of English Language and Literature, School of Foreign Languages, University of Jordan, Amman, Jordan
Description
Some classic machine translation (MT) evaluation methods, such as the bilingual evaluation understudy (BLEU) score, have notably underperformed when evaluating machine translations for morphologically rich languages like Arabic. However, recent remarkable advances in word vectors and sentence vectors have opened new research avenues for low-resource languages. This paper proposes a novel linguistics-based evaluation method for English sentences translated into Arabic. The proposed approach combines penalties based on length and position with context-based schemes such as part-of-speech (POS) tagging and multilingual Sentence-BERT (SBERT) models for machine translation evaluation. The proposed technique is tested using Pearson correlation as the performance measure and compared with state-of-the-art techniques. The experimental results demonstrate that the proposed model clearly outperforms other MT evaluation methods such as BLEU.
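The abstract outlines a pipeline of a length penalty, an embedding-based semantic similarity, and Pearson correlation against human judgments for meta-evaluation. A minimal sketch of those three pieces is below; note this is an illustration under assumptions, not the paper's exact formulation. The embeddings would come from a multilingual SBERT model (e.g. via the `sentence-transformers` library); here they are stubbed with toy vectors, and the penalty shown is a BLEU-style exponential length penalty chosen for illustration.

```python
import math

def length_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU-style length penalty (illustrative): 1.0 when token counts
    match, decaying exponentially as they diverge."""
    if candidate_len == reference_len:
        return 1.0
    shorter, longer = sorted((candidate_len, reference_len))
    return math.exp(1.0 - longer / shorter)

def cosine(u, v):
    """Cosine similarity between two sentence-embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pearson(x, y):
    """Pearson correlation between metric scores and human judgments,
    used to meta-evaluate the metric itself."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy usage: in practice cand_emb/ref_emb would be multilingual SBERT
# embeddings of the Arabic candidate and reference sentences.
cand_emb, ref_emb = [0.2, 0.9, 0.1], [0.25, 0.85, 0.15]
score = length_penalty(6, 7) * cosine(cand_emb, ref_emb)
```

A higher Pearson correlation between such scores and human adequacy judgments, computed over a test set, is what the paper uses to compare the proposed metric against BLEU.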
Files
- 23 28806 v27i1 Jul22.pdf (589.4 kB, md5:c244d80d7373e0d09ea0a5c75f8dcf02)