Document-level human evaluation of Machine Translation (MT) has been gaining interest in the community. However, little is known about the issues involved in using document-level methodologies to assess MT quality. This presentation will explore what has been done so far in document-level human MT evaluation, touching on the issues of inter-annotator agreement, effort, and misevaluation.
Sheila Castilho graduated in Linguistics. She holds a joint Master's in Natural Language Processing from the University of Wolverhampton (UK) and the University of Algarve (Portugal), and she is currently an Irish Research Council Research Fellow at the ADAPT Centre, conducting research on Machine Translation evaluation in the School of Computing at DCU. She has authored several journal articles and book chapters on translation technology, post-editing of machine translation, user evaluation of machine translation, and translators' perception of machine translation. Her research interests include machine translation, post-editing, machine and human translation evaluation, document-level machine translation, usability, and translation technologies.