Chapter 7. Quality is in the eyes of the reviewer
Ana Guerberof
Abstract
As part of a larger research project exploring correlations between productivity, quality and experience in the post-editing of machine-translated and translation-memory outputs by a team of 24 professional translators, three reviewers were asked to review the translations/post-editions completed by these translators and to fill in the corresponding quality evaluation forms. The data obtained from the three reviewers’ evaluations was analysed to determine whether there was agreement in the time taken to complete the task, as well as in the number and type of errors marked. The results show statistically significant differences between reviewers, although there were also correlations between pairs of reviewers depending on the provenance of the text analysed. Reviewers tended to agree on the overall number of errors found in the No match category, but their agreement in the Fuzzy and MT match categories was weak or absent, perhaps indicating that the origin of the text influenced their evaluation. The reviewers also tended to agree on the best and worst performers, but there was great disparity in the translators’ rankings when they were ordered according to the number of errors.
Chapters in this book
- Prelim pages i
- Table of contents v
- Introduction 1
Part I. Cognitive processes in reading during translation
- Chapter 1. Reading for translation 17
- Chapter 2. Four fundamental types of reading during translation 55
Part II. Literality, directionality and intralingual translation processes
- Chapter 3. Measuring translation literality 81
- Chapter 4. Translation, post-editing and directionality 107
- Chapter 5. Intralingual and interlingual translation 135
Part III. Computing and assessing translation effort, performance, and quality
- Chapter 6. From process to product 161
- Chapter 7. Quality is in the eyes of the reviewer 187
- Chapter 8. Translation technology and learner performance 207
- Notes on contributors 235
- Index 241