Category Archives: Seminars 2021

Technologies for Translation and Interpreting: Challenges and Latest Developments

Dr Joss Moorkens, Dublin City University.

Digital Taylorism in the Translation Industry

23 July 2021


Translators have worked with the assistance of computers for many years, usually translating whole texts divided into segments but in sequential order. To maximise efficiency, and inspired by similar moves in the tech industry and by predictions for Industry 4.0, large translation companies have begun to break tasks down into smaller chunks and to rigidly define and monitor translation processes. This is particularly true of platform-mediated work, highly collaborative workflows, and multimedia work that requires near-live turnaround times. This article considers such workflows in the context of measures of job satisfaction and discussions of sustainable work systems, proposing that companies prioritise long-term returns and attempt to balance the needs of all stakeholders in the translation process. Translators and translator trainers also have a role to play in achieving this balance.


Joss Moorkens is an Associate Professor and Chair of postgraduate translation programmes at the School of Applied Language and Intercultural Studies at Dublin City University. He is also a Funded Investigator with the ADAPT Centre and a member of the Centre for Translation and Textual Studies. He has authored over 50 journal articles, book chapters, and conference papers on translation technology, user interaction with and evaluation of machine translation, translator precarity, and translation ethics. He is General Coeditor of the journal Translation Spaces with Prof. Dorothy Kenny, and coedited the book ‘Translation Quality Assessment: From Principles to Practice’, published in 2018 by Springer, as well as special issues of Machine Translation (2019) and Translation Spaces (2020). He leads the Technology working group (with Prof. Tomas Svoboda of Charles University) as a board member of the European Masters in Translation network and sits on the advisory board of the Journal of Specialised Translation.

Technologies for Translation and Interpreting: Challenges and Latest Developments

Prof. Barry Olsen, The Middlebury Institute of International Studies

RSI has taken the world by storm. So, what have we learned and where do we go from here?

16 July 2021


No one could have foreseen the effects of the COVID-19 pandemic on the interpreting profession or its accompanying effects on the adoption rate of remote simultaneous interpretation (RSI) all over the world. In a matter of weeks, international organizations, national governments, non-governmental organizations, and private corporations were meeting, negotiating, and conducting business online at a scale never seen before, often in multiple languages. But this abrupt adoption of web conferencing with RSI was not entirely smooth or without its challenges. We are now at a stage where we can compile a list of lessons learned during this unprecedented shift in professional practice and turn our sights toward the future to address the new digital world of multilingual communication and interpretation technology’s place in it. This presentation will share some of those lessons learned and some thoughts about what the future of RSI may hold.


Barry Slaughter Olsen is a veteran conference interpreter and technophile with over twenty-five years of experience interpreting, training interpreters, and organizing language services. He is a professor at the Middlebury Institute of International Studies at Monterey (MIIS) and the Vice-President of Client Success at KUDO, a multilingual web conferencing platform. He was co-president of InterpretAmerica from 2009 to 2020. A pioneer in the field of remote simultaneous interpretation (RSI), he is co-inventor on two patents on RSI technologies. He is a member of the International Association of Conference Interpreters (AIIC). Barry has been interviewed numerous times by international media (CNN, CBC, MSNBC, NPR, and PBS) about interpreting and translation. For updates on interpreting, technology, and training, follow him on Twitter @ProfessorOlsen.

Technologies for Translation and Interpreting: Challenges and Latest Developments

Prof Ruslan Mitkov, University of Wolverhampton

What does the future hold for humans, computers, translators, and interpreters?

A non-clairvoyant’s view.

22 July 2021

(60-min introduction to Natural Language Processing)

Abstract: Computers are ubiquitous. But how good are they at understanding, producing, and translating natural languages? In other words, what is the level of their linguistic intelligence? This presentation will examine the linguistic intelligence of computers and ask how far advances in Artificial Intelligence (AI) can go. Illustrations will be drawn from key applications that address parts of the translation process, such as machine translation and translation memory systems, and the challenges ahead will be discussed.

The presentation begins with a brief historical flashback, plotting the timeline of the linguistic intelligence of computers against that of humans. It then gives another snapshot in time depicting early work on Machine Translation. Over the last 20 years, as will be discussed in the presentation, advances in Natural Language Processing (NLP) have significantly increased the linguistic intelligence of computers but this intelligence still lags behind that of humans.

The presentation will go on to explain why it is so difficult for computers to understand, translate and, in general, to process natural languages; it is a steep road, and a long and winding one, for both computers and researchers. The talk will briefly present well-established NLP techniques that computers use when ‘learning’ to speak our languages, including initial rule-based and knowledge-based methods and more recent machine learning as well as deep learning methods, which are regarded as highly promising. A selection of Natural Language Processing applications will be outlined after that. In particular, the talk will look at the recent advances in Machine Translation and will assess the claims that Neural Machine Translation has reached parity with human translation.

The speaker will express his views on the potential of MT, and the latest research on ‘intelligent’ Translation Memory systems will be outlined along with expected developments. The future of Interpreting Technology and its impact on interpreters will also be touched on.

I am no clairvoyant, but during my plenary talks I am often asked to predict how far computers will go in their ability to learn and translate language. At the end of my presentation I shall share with you my predictions and, in general, my vision for the future of translation and interpreting technologies. These predictions, though tentative, will be relevant to the impact that AI advances can have on the work of translators and interpreters in the future.

Speaker’s bio: Prof Dr Ruslan Mitkov has been working in Natural Language Processing (NLP), Computational Linguistics, Corpus Linguistics, Machine Translation, Translation Technology and related areas since the early 1980s. Whereas Prof Mitkov is best known for his seminal contributions to the areas of anaphora resolution and automatic generation of multiple-choice tests, his extensively cited research (more than 250 publications including 16 books, 32 journal articles and 37 book chapters) also covers topics such as machine translation, translation memory and translation technology in general, bilingual term extraction, automatic identification of cognates and false friends, natural language generation, automatic summarisation, computer-aided language processing, centering, evaluation, corpus annotation, NLP-driven corpus-based study of translation universals, text simplification, NLP for people with language disorders and more recently – computational phraseology. Mitkov is author of the monograph Anaphora resolution (Longman) and Editor of the most successful Oxford University Press Handbook – The Oxford Handbook of Computational Linguistics. Current prestigious projects include his role as Executive Editor of the Journal of Natural Language Engineering published by Cambridge University Press and Editor-in-Chief of the Natural Language Processing book series of John Benjamins publishers. Dr Mitkov is also working on the forthcoming Oxford Dictionary of Computational Linguistics (Oxford University Press, co-authored with Patrick Hanks) and the forthcoming second, substantially revised edition of the Oxford Handbook of Computational Linguistics.

Prof Mitkov has been invited as a keynote speaker at a number of international conferences. He has acted as Programme Chair of various international conferences on Natural Language Processing (NLP), Machine Translation, Translation Technology, Translation Studies, Corpus Linguistics and Anaphora Resolution. He is asked on a regular basis to review for leading international funding bodies and organisations and to act as a referee for applications for Professorships both in North America and Europe. Ruslan Mitkov is regularly asked to review for leading journals, publishers and conferences and to serve as a member of Programme Committees or Editorial Boards. Prof Mitkov has been an external examiner of many doctoral theses and curricula in the UK and abroad, including Master’s programmes related to NLP, Translation and Translation Technology. Dr Mitkov has considerable external funding to his credit (more than €20,000,000) and is currently acting as Principal Investigator of several large projects, some of which are funded by UK research councils, by the EC as well as by companies and users from the UK and USA.

Ruslan Mitkov received his MSc from the Humboldt University in Berlin, his PhD from the Technical University in Dresden and worked as a Research Professor at the Institute of Mathematics, Bulgarian Academy of Sciences, Sofia. Mitkov is Professor of Computational Linguistics and Language Engineering at the University of Wolverhampton which he joined in 1995 and where he set up the Research Group in Computational Linguistics. His Research Group has emerged as an internationally leading unit in applied Natural Language Processing and members of the group have won awards in different NLP/shared-task competitions. In addition to being Head of the Research Group in Computational Linguistics, Prof Mitkov is also Director of the Research Institute in Information and Language Processing and Director of the Responsible Digital Humanities Lab. The Research Institute consists of the Research Group in Computational Linguistics and the Research Group in Statistical Cybermetrics, which is another top performer internationally. Ruslan Mitkov is Vice President of ASLING, an international Association for promoting Language Technology. Dr Mitkov is a Fellow of the Alexander von Humboldt Foundation, Germany, was a Marie Curie Fellow, Distinguished Visiting Professor at the University of Franche-Comté in Besançon, France and Distinguished Visiting Researcher at the University of Malaga, Spain; he also serves/has served as Vice-Chair for the prestigious EC funding programmes ‘Future and Emerging Technologies’ and ‘EIC Pathfinder Open’. In recognition of his outstanding professional/research achievements, Prof Mitkov was awarded the title of Doctor Honoris Causa at Plovdiv University in November 2011. At the end of October 2014 Dr Mitkov was also conferred Professor Honoris Causa at Veliko Tarnovo University.

Digital Humanities

Dr Ahmed Omer, XTM International

5 July 2021

Title: Computational Stylometry of Arabic Literature


The successful implementation of stylometric methods with English texts has motivated researchers who work with Arabic texts to investigate whether these methods can be applied to Arabic as well. Taking into account the distinctive characteristics of the Arabic language, the main aim of my study is to identify the most useful linguistic features for authorship attribution in Arabic texts. As well as using features derived from English studies of authorship attribution, I developed a number of feature sets derived from Arabic linguistic theory, namely Arud, Nazm and Wazn. The feature sets were compared on two corpora of travelogues, one in English and one in Arabic. They were examined in conjunction with agglomerative clustering methods and traditional machine learning classifiers, including SVM, Naïve Bayes, and KNN, as well as a Deep Learning model implemented using the open-source package Keras. The findings from this first part of the thesis were used to examine six real-life case studies from Arabic: two of Authorship Attribution, two of Author Profiling, and two of Authorship Verification. These case studies were:

· Was Al-Qarni’s “Don’t Despair” plagiarised from Salwa?

· Did Abdu or Amin write certain key chapters of “Women’s Rights”?

· Were the “Hanging Poems” pre-Islamic or more recent?

· A study of the dialectology of Arabic speech.

· Was a box of posthumous texts by the Nobel prize winner Naguib Mahfouz indeed by him?

· Were some texts written by the Mediaeval scholar Al-Ghazali by him or by somebody else?
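The attribution step behind such case studies can be sketched in miniature: represent each text as a profile of relative function-word frequencies (one classic stylometric feature set) and attribute a disputed text to the candidate author whose profile is closest by cosine similarity. The texts, authors, and feature list below are invented for illustration; the study itself used much richer feature sets (including Arud, Nazm and Wazn) and classifiers such as SVM, Naïve Bayes, KNN and a Keras deep learning model.

```python
from collections import Counter
import math

# Function-word relative frequencies: a classic stylometric profile.
FEATURES = ["the", "of", "and", "in"]

def profile(text, features=FEATURES):
    """Relative frequency of each feature word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[f] / total for f in features]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(disputed, candidates):
    """Return the candidate author whose profile is closest to the disputed text."""
    dp = profile(disputed)
    return max(candidates, key=lambda a: cosine(dp, profile(candidates[a])))

# Invented toy corpus: author A overuses "the", author B overuses "of".
CANDIDATES = {
    "A": "the cat sat on the mat and the dog in the yard and the bird of prey",
    "B": "of all things of note of which we speak of time of tide",
}

print(attribute("the ship and the sea and the sky the stars", CANDIDATES))  # → A
```

Real studies scale this idea up: hundreds of features, proper classifiers, and cross-validation over held-out texts, but the underlying comparison of feature profiles is the same.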


Ahmed Omer has an M.Sc. in Computer Science from Napier University in Edinburgh and a Ph.D. in Computational Linguistics from the University of Wolverhampton. He now works at XTM International as a Computational Linguistics Expert. The company works in Machine Translation and uses the inter-language vector space method, an approach also used by Google and, more recently, by Facebook to enforce their policies and to translate texts for customers on their platforms.
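The inter-language vector space idea can be illustrated with a minimal sketch: words from two languages are mapped into one shared embedding space, and a translation candidate is retrieved by nearest-neighbour search. The two-dimensional vectors and the transliterated Arabic words below are invented for illustration; real systems learn high-dimensional embeddings from large corpora.

```python
import math

# Invented 2-D vectors standing in for learned cross-lingual embeddings.
EN = {"cat": [0.90, 0.10], "sea": [0.10, 0.90]}
AR = {"qitt": [0.88, 0.12], "bahr": [0.15, 0.85]}  # transliterated Arabic

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def translate(word):
    """Nearest neighbour in the shared space is the translation candidate."""
    return max(AR, key=lambda t: cosine(EN[word], AR[t]))

print(translate("cat"))  # → qitt
```

Because similar meanings land near each other in the shared space, the same lookup works in either direction and across any language pair embedded into that space.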

Digital Humanities

Prof. Dr. Frederik Truyen, KU Leuven, Belgium.

28 June 2021

Title: Digitization of Heritage Collections: from inside-out to outside-in: the many facets of digital transitions.


In this talk we will address the challenges of digital transformation in the Cultural Heritage sector, starting from the example of digitizing photographic collections for Europeana. We will discuss how, starting from the actual digitization of selected collections, a series of workflow transitions is inevitably set in motion that has a transformational impact, not only on the way GLAM institutions operate, but also on how they rethink their core mission and their fundamental relationship with their audiences. We will highlight technical as well as organizational and management challenges, and show how these reveal the place and contribution of digital humanities research.


Fred Truyen is professor at the Faculty of Arts, KU Leuven. He publishes on Digitization of Heritage, Photography and E-Learning in the Humanities. He is in charge of the mediaLab CS Digital. He was involved in many projects on digitization of Cultural Heritage, such as EuropeanaPhotography (coordinator), Europeana Space (pilot leader), and the Europeana DSI (aggregator for photography). Currently he is involved in the KU Leuven/FWO funded project Cornelia – a database for 17th century Art industries – and in the CEF Generic call for Europeana with the projects on Migration in the Arts and Sciences, Kaleidoscope: the 1950s in Europe, Europeana Common Culture and currently Europeana: Century of Change. He also participates in the H2020 projects Detect: Detecting Transcultural Identity in European Popular Crime Narratives and Indices. Moreover, he has extensive experience in data modelling and metadata development for image databases in the cultural-historical field. His main research focus is the digital transformation roadmap for Cultural Heritage Institutions. Prof. Truyen teaches the courses Online Publishing and Digital Cultural Heritage in the MA Cultural Studies and the MA Digital Humanities at KU Leuven. He co-teaches in Cultural Economics and Cultural Policy. Prof. Truyen is a board member of the Europeana Network Association and is active in the field of European policies on Digitization of Cultural Heritage. He is also a member of CLARIAH Flanders. He is the president of Photoconsortium, an association for the safeguard and promotion of photographic heritage.

Technologies for Translation and Interpreting: Challenges and Latest Developments

Prof. Jan-Louis Kruger, Macquarie University.

18 June 2021

Title: Studying subtitle reading using eye tracking


The world of audiovisual media has changed on a scale last seen with the shift from print to digital photography. VOD has moved from an expensive concept limited by technology and bandwidth to the norm, not only in most of the developed world but also, as an accelerating equaliser, in developing countries. This has increased the reach and potential of audiovisual translation.

While the skills required to create AVT have come within reach of a large group of practitioners thanks to advances in editing software and technology, with many processes from transcription to cuing now automated, research on the reception and processing of multimodal texts has also developed rapidly. This has given us new insights into the way viewers process the text of subtitles while also attending to auditory input and the rich visual code of film. The multimodality of film, although acknowledged as one of the unique qualities of translation in this context, is often overlooked in technological advances. When the emphasis is on the cheapest and simplest way of transferring spoken dialogue to written text, or visual scenes to auditory descriptions, the complex interplay between language and other signs is easily lost.

Eye tracking provides a powerful tool for investigating the cognitive processing of viewers watching subtitled film, with research in this area drawing on cognitive science, psycholinguistics and psychology. I will present a brief description of eye tracking in AVT as well as the findings of some recent studies on subtitle reading at different subtitle presentation rates and in the presence of secondary visual tasks.
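Subtitle presentation rate, one of the variables mentioned above, is commonly quantified in characters per second (CPS) computed from a subtitle's on-screen timing. A minimal sketch with invented subtitle records (note that whether spaces count toward the character total varies between guidelines; here they are included):

```python
# Invented subtitle records: (start_seconds, end_seconds, text).
SUBS = [
    (0.0, 2.0, "Hello there."),
    (2.5, 5.5, "Nice to see you again, friend."),
]

def cps(start, end, text):
    """Presentation rate in characters per second (spaces included)."""
    return len(text) / (end - start)

rates = [cps(*sub) for sub in SUBS]
print(rates)  # → [6.0, 10.0]
```

Eye-tracking studies manipulate exactly this quantity, for instance comparing reading behaviour at slower versus faster rates, so being able to compute it consistently from subtitle files is a small but essential step in such experiments.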


Jan-Louis Kruger is professor and Head of the Department of Linguistics at Macquarie University. He started his research career in English literature with a particular interest in the way in which Modernist poets and novelists manipulate language, and in the construction of narrative point of view. From there he started exploring the creation of narrative in film and how audiovisual translation (subtitling and audio description) facilitates the immersion of audiences in the fictional reality of film.

In the past decade his attention has shifted to the multimodal integration of language in video where auditory and visual sources of information supplement and compete with text in the processing of subtitles. His research uses eye tracking experiments (combined with psychometric instruments and performance measures) to investigate the cognitive processing of language in multimodal contexts. His current work looks at the impact of redundant and competing sources of information on the reading of subtitles at different presentation rates and in the presence of different languages.