The second call for papers for the 2nd Workshop on Natural Language Processing for Translation Memories (NLP4TM 2016), to be organised in conjunction with LREC 2016, has been distributed. The deadline for paper submission is in two weeks. For more details please visit the workshop’s web page.
- Constantin Orasan (University of Wolverhampton, UK)
- Marcello Federico (FBK, Italy)
Submission deadline: May 15, 2016
1. Call For Papers
Translation Memories (TMs) are amongst the most widely used tools by professional translators. The underlying idea of TMs is that a translator should benefit as much as possible from previous translations by being able to retrieve how a similar sentence was translated before. Moreover, the use of TMs aims to guarantee that new translations follow the client’s specified style and terminology. Although the core idea of these systems relies on comparing segments (typically of sentence length) from the document to be translated with segments from previous translations, most existing TM systems hardly use any language processing for this. Instead of addressing this issue, most work on translation memories has focused on improving the user experience, for example by supporting a variety of document formats and providing intuitive user interfaces.
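The segment comparison described above is usually done with surface-level fuzzy matching. As a minimal illustrative sketch (not the implementation of any particular TM system), the following code scores a new segment against TM entries using normalised word-level Levenshtein distance and retrieves the closest previous translation; the toy memory entries are invented for the example:

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance computed by dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[m][n]

def fuzzy_score(source, candidate):
    """Similarity in [0, 1]; 1.0 means an exact (word-level) match."""
    s, c = source.split(), candidate.split()
    if not s and not c:
        return 1.0
    return 1.0 - edit_distance(s, c) / max(len(s), len(c))

def best_match(segment, memory):
    """Return the (source, target) TM entry closest to the new segment."""
    return max(memory, key=lambda entry: fuzzy_score(segment, entry[0]))

# Toy translation memory (invented entries for illustration only).
tm = [("the contract must be signed", "le contrat doit être signé"),
      ("the invoice was paid", "la facture a été payée")]
src, tgt = best_match("the contract must be signed today", tm)
```

In practice TM tools report the fuzzy score as a percentage (e.g. a "85% match") so the translator can decide whether the retrieved translation is worth editing.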
The advanced translation memory tool developed by Rohit Gupta is now available on GitHub at https://github.com/rohitguptacs/TMAdvanced
Current Translation Memory (TM) systems work at the surface level and lack semantic knowledge when matching. This tool implements an approach that incorporates semantic knowledge, in the form of paraphrasing, into matching and retrieval. Most TM systems use Levenshtein edit distance or some variation of it; this tool implements an efficient way of combining paraphrasing with edit distance, based on greedy approximation and dynamic programming. We obtained significant improvements in both retrieval and the translation of retrieved segments. More details about the approach and its evaluation are given in the following publications:
Approach: Rohit Gupta and Constantin Orasan. 2014. Incorporating Paraphrasing in Translation Memory Matching and Retrieval. In Proceedings of the European Association of Machine Translation (EAMT-2014).
Human Evaluations: Rohit Gupta, Constantin Orasan, Marcos Zampieri, Mihaela Vela and Josef van Genabith. 2015. Can Translation Memories afford not to use paraphrasing? In Proceedings of EAMT-2015, Antalya, Turkey.
The tool was developed as part of the EXPERT project.
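The general idea of combining paraphrasing with edit distance can be sketched as follows. This is a deliberately simplified illustration, not TMAdvanced's actual algorithm (which handles multi-word paraphrases via greedy approximation, as described in the publications above): here a substitution simply costs nothing when a toy, invented paraphrase table marks the two words as equivalent, so paraphrased segments score as closer matches than plain Levenshtein distance would suggest:

```python
# Toy single-word paraphrase table, invented for illustration.
PARAPHRASES = {("begin", "start"), ("purchase", "buy")}

def equivalent(w1, w2):
    """Words match if identical or listed as paraphrases (either order)."""
    return w1 == w2 or (w1, w2) in PARAPHRASES or (w2, w1) in PARAPHRASES

def paraphrase_edit_distance(a, b):
    """Word-level edit distance where paraphrase substitutions are free."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if equivalent(a[i - 1], b[j - 1]) else 1
            dp[i][j] = min(dp[i - 1][j] + 1,
                           dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + cost)
    return dp[m][n]

# "please begin the test" vs "please start the test": plain Levenshtein
# distance is 1, but with the paraphrase table the distance drops to 0.
d = paraphrase_edit_distance("please begin the test".split(),
                             "please start the test".split())
```

Real paraphrase resources (e.g. paraphrase databases) contain multi-word entries, which is what makes the efficient dynamic-programming formulation in the papers above non-trivial.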
The 2nd Call for Papers of the Workshop on Natural Language Processing for Translation Memories (NLP4TM) organised at RANLP 2015 by Constantin Orasan and Rohit Gupta has been published. Information about the topics addressed by the workshop and important dates can be found on the workshop’s webpage.
Research carried out in the EXPERT project by researchers from the University of Wolverhampton and Saarland University, Germany, is being presented at the European Association for Machine Translation (EAMT) 2015 conference. The work shows how paraphrasing can help translators who use translation memories.
By Patrick Hanks and Sara Može
Research Institute of Information and Language Processing
University of Wolverhampton
No doubt every politically conscious person in Britain has a pretty good idea by now of the main issues selected by the various political parties fighting each other for votes in the upcoming General Election. An obvious way of finding out what those issues are is to read the manifestos of each of the parties.
But linguistic analysis can tell us more than the politicians ever intended to reveal. Linguists working on the DVC project at the University of Wolverhampton have been using corpus-analysis tools such as Adam Kilgarriff’s Sketch Engine to explore the language used in the manifestos of four parties.
The Research Group in Computational Linguistics is happy to announce their new website. Not all the content has been migrated yet, so please bear with us.