Evaluating intelligibility in human translation and machine translation

Research output: Contribution to journal › Review article

Abstract

Research in automated translation aims primarily to develop translation systems that further enhance the transfer of knowledge and information. This need for transfer has driven major advances in machine translation (MT) software and encourages further research in various MT-related areas. However, there have been no focused investigations of evaluation criteria, particularly evaluation that involves human evaluators and reconciles human translation (HT) with MT. Therefore, focusing on two evaluation attributes, Accuracy and Intelligibility, a study was conducted to investigate translation evaluation criteria for content and language transfer by reconciling HT and MT evaluation based on human evaluators' perceptions. The study examined the range of criteria that human evaluators expect for HT and MT under the two attributes, and the evaluation was tested on a machine system to observe its performance in terms of Accuracy and Intelligibility. This paper reports the range of criteria for evaluating translation in terms of Intelligibility, as expected by human evaluators for HT and MT, with respect to content and language transfer. The study uses a mixed-methods approach combining soft and hard data collection. The results demonstrate that the expected range for each criterion identified for content evaluation is higher in HT than in MT. The implications of the study are discussed to provide an understanding of how human and automated translation are evaluated in terms of Intelligibility.

Original language: English
Pages (from–to): 251–264
Number of pages: 14
Journal: 3L: Language, Linguistics, Literature
Volume: 23
Issue number: 4
DOI: 10.17576/3L-2017-2304-19
Publication status: Published - 1 Jan 2017

Keywords

  • Criteria
  • Evaluation
  • Human translation
  • Intelligibility
  • Machine translation

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Literature and Literary Theory

Cite this

@article{fbe06a223e43401ca2cf09d6863ee357,
title = "Evaluating intelligibility in human translation and machine translation",
abstract = "Research in automated translation aims primarily to develop translation systems that further enhance the transfer of knowledge and information. This need for transfer has driven major advances in machine translation (MT) software and encourages further research in various MT-related areas. However, there have been no focused investigations of evaluation criteria, particularly evaluation that involves human evaluators and reconciles human translation (HT) with MT. Therefore, focusing on two evaluation attributes, Accuracy and Intelligibility, a study was conducted to investigate translation evaluation criteria for content and language transfer by reconciling HT and MT evaluation based on human evaluators' perceptions. The study examined the range of criteria that human evaluators expect for HT and MT under the two attributes, and the evaluation was tested on a machine system to observe its performance in terms of Accuracy and Intelligibility. This paper reports the range of criteria for evaluating translation in terms of Intelligibility, as expected by human evaluators for HT and MT, with respect to content and language transfer. The study uses a mixed-methods approach combining soft and hard data collection. The results demonstrate that the expected range for each criterion identified for content evaluation is higher in HT than in MT. The implications of the study are discussed to provide an understanding of how human and automated translation are evaluated in terms of Intelligibility.",
keywords = "Criteria, Evaluation, Human translation, Intelligibility, Machine translation",
author = "{Md. Yusof}, Noraini and Saadiyah Darus and {Ab Aziz}, {Mohd Juzaiddin}",
year = "2017",
month = "1",
day = "1",
doi = "10.17576/3L-2017-2304-19",
language = "English",
volume = "23",
pages = "251--264",
journal = "3L: Language, Linguistics, Literature",
issn = "0128-5157",
publisher = "Penerbit Universiti Kebangsaan Malaysia",
number = "4",
}

TY - JOUR

T1 - Evaluating intelligibility in human translation and machine translation

AU - Md. Yusof, Noraini

AU - Darus, Saadiyah

AU - Ab Aziz, Mohd Juzaiddin

PY - 2017/1/1

Y1 - 2017/1/1

N2 - Research in automated translation aims primarily to develop translation systems that further enhance the transfer of knowledge and information. This need for transfer has driven major advances in machine translation (MT) software and encourages further research in various MT-related areas. However, there have been no focused investigations of evaluation criteria, particularly evaluation that involves human evaluators and reconciles human translation (HT) with MT. Therefore, focusing on two evaluation attributes, Accuracy and Intelligibility, a study was conducted to investigate translation evaluation criteria for content and language transfer by reconciling HT and MT evaluation based on human evaluators' perceptions. The study examined the range of criteria that human evaluators expect for HT and MT under the two attributes, and the evaluation was tested on a machine system to observe its performance in terms of Accuracy and Intelligibility. This paper reports the range of criteria for evaluating translation in terms of Intelligibility, as expected by human evaluators for HT and MT, with respect to content and language transfer. The study uses a mixed-methods approach combining soft and hard data collection. The results demonstrate that the expected range for each criterion identified for content evaluation is higher in HT than in MT. The implications of the study are discussed to provide an understanding of how human and automated translation are evaluated in terms of Intelligibility.

AB - Research in automated translation aims primarily to develop translation systems that further enhance the transfer of knowledge and information. This need for transfer has driven major advances in machine translation (MT) software and encourages further research in various MT-related areas. However, there have been no focused investigations of evaluation criteria, particularly evaluation that involves human evaluators and reconciles human translation (HT) with MT. Therefore, focusing on two evaluation attributes, Accuracy and Intelligibility, a study was conducted to investigate translation evaluation criteria for content and language transfer by reconciling HT and MT evaluation based on human evaluators' perceptions. The study examined the range of criteria that human evaluators expect for HT and MT under the two attributes, and the evaluation was tested on a machine system to observe its performance in terms of Accuracy and Intelligibility. This paper reports the range of criteria for evaluating translation in terms of Intelligibility, as expected by human evaluators for HT and MT, with respect to content and language transfer. The study uses a mixed-methods approach combining soft and hard data collection. The results demonstrate that the expected range for each criterion identified for content evaluation is higher in HT than in MT. The implications of the study are discussed to provide an understanding of how human and automated translation are evaluated in terms of Intelligibility.

KW - Criteria

KW - Evaluation

KW - Human translation

KW - Intelligibility

KW - Machine translation

UR - http://www.scopus.com/inward/record.url?scp=85040367062&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85040367062&partnerID=8YFLogxK

U2 - 10.17576/3L-2017-2304-19

DO - 10.17576/3L-2017-2304-19

M3 - Review article

VL - 23

SP - 251

EP - 264

JO - 3L: Language, Linguistics, Literature

JF - 3L: Language, Linguistics, Literature

SN - 0128-5157

IS - 4

ER -