Ahmet Üstün, University of Groningen, Netherlands
A Single Model for Many Languages with Adapters
25 January 2022
Abstract:
Recent advances in pre-trained language models have brought the idea of truly multilingual models that cover many languages across different tasks. However, cross-lingual interference and limited model capacity, i.e. the curse of multilinguality, remain major obstacles, especially for zero- and low-resource languages. Adapters (Houlsby et al., 2019), small bottleneck layers inserted into Transformer models, enable modular and efficient transfer learning. They can also serve as a solution to the curse of multilinguality. In this talk, I will discuss how to use adapters to build a single model for many languages, including zero-shot and unsupervised scenarios in dependency parsing and neural machine translation, respectively.
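For readers unfamiliar with the architecture mentioned above, here is a minimal PyTorch sketch of a Houlsby-style bottleneck adapter: a down-projection, a nonlinearity, an up-projection, and a residual connection. The class name, hidden size, and bottleneck size are illustrative choices, not the exact configuration used in the talk.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter in the style of Houlsby et al. (2019):
    down-projection -> nonlinearity -> up-projection, wrapped in a
    residual connection. Inserted after Transformer sub-layers."""

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        # The bottleneck is small relative to the model dimension,
        # so each adapter adds only a few parameters per layer.
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.activation = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection preserves the pre-trained
        # representation; only the bottleneck transformation is learned.
        return hidden_states + self.up(self.activation(self.down(hidden_states)))
```

During adaptation, the pre-trained Transformer weights are typically frozen and only the adapter parameters are trained, which is why a separate lightweight adapter can be learned per language or task while one base model is shared.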
Bio:
Ahmet Üstün is a PhD student at the Center for Language and Cognition (CLCG) at the University of Groningen. He is a member of the Computational Linguistics research group under the supervision of Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. His research focuses on multilingual natural language processing, with a special interest in cross-lingual transfer learning. In this context, he has worked on cross-lingual word embeddings, multilingual dependency parsing, and multilingual unsupervised NMT. His aim is to find efficient multilingual adaptation methods for low-resource languages that do not suffer from the curse of multilinguality.