no code implementations • WMT (EMNLP) 2021 • Viktor Hangya, Qianchu Liu, Dario Stojanovski, Alexander Fraser, Anna Korhonen
The performance of NMT systems has improved drastically in the past few years but the translation of multi-sense words still poses a challenge.
no code implementations • 30 Sep 2022 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser
Training a new adapter on each language pair or training a single adapter on all language pairs without updating the pretrained model has been proposed as a parameter-efficient alternative.
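The adapter idea can be illustrated with a minimal sketch (assuming PyTorch; the module and dimension names are illustrative, not the paper's implementation): a small bottleneck network is inserted into the frozen pretrained transformer, and only its parameters are updated for a given language pair.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small bottleneck module inserted into a frozen pretrained transformer;
    only these parameters are trained for a given language (pair)."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # down-projection
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # up-projection
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the frozen model's representation passes
        # through unchanged, with the adapter adding a small correction.
        return x + self.up(self.act(self.down(x)))
```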
1 code implementation • NAACL 2021 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser
Successful methods for unsupervised neural machine translation (UNMT) employ cross-lingual pretraining via self-supervision, often in the form of a masked language modeling or a sequence generation task, which requires the model to align the lexical- and high-level representations of the two languages.
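As a rough illustration of the masked language modeling objective mentioned above (a toy sketch in plain Python, not the authors' code): tokens are randomly replaced with a mask symbol and the model is trained to recover them, which encourages shared representations when both languages pass through one model.

```python
import random

def mask_tokens(tokens, mask_token="<mask>", mask_prob=0.15):
    """Toy masked-language-modeling corruption: replace a random subset of
    tokens with a mask symbol; the model learns to recover the originals."""
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)   # corrupted input position
            targets.append(tok)         # supervision: the original token
        else:
            masked.append(tok)
            targets.append(None)        # no loss at unmasked positions
    return masked, targets

# Example: mask_tokens("wir gehen nach hause".split())
```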
no code implementations • COLING 2020 • Dario Stojanovski, Benno Krojer, Denis Peskov, Alexander Fraser
Recent high scores on pronoun translation using context-aware neural machine translation have suggested that current approaches work well.
1 code implementation • WMT (EMNLP) 2020 • Alexandra Chronopoulou, Dario Stojanovski, Viktor Hangya, Alexander Fraser
Our core unsupervised neural machine translation (UNMT) system follows the strategy of Chronopoulou et al. (2020): a language generation model pretrained on monolingual German data is fine-tuned on both German and Upper Sorbian and then used to initialize a UNMT model, which is trained with online backtranslation.
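Online backtranslation can be sketched as follows (schematic only; `model.translate` and `model.train_step` are hypothetical names, not the system's actual API): at each step the current model translates monolingual target-side text back into the source language, and the resulting synthetic pair supervises a standard translation update.

```python
def online_backtranslation_step(model, batch_tgt, src_lang, tgt_lang):
    """One online back-translation update (schematic).

    `model.translate` and `model.train_step` are hypothetical method
    names used for illustration."""
    # Translate target-language monolingual text into the source language
    # with the current model parameters.
    synthetic_src = model.translate(batch_tgt, from_lang=tgt_lang, to_lang=src_lang)
    # Train on the synthetic (source -> target) pair as if it were parallel data.
    loss = model.train_step(src=synthetic_src, tgt=batch_tgt,
                            from_lang=src_lang, to_lang=tgt_lang)
    return loss
```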
1 code implementation • EMNLP 2020 • Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser
Using a language model (LM) pretrained on two languages with large monolingual data in order to initialize an unsupervised neural machine translation (UNMT) system yields state-of-the-art results.
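The initialization step can be sketched as copying the pretrained bilingual LM weights into both sides of the encoder-decoder before UNMT training begins (a minimal sketch assuming PyTorch; the attribute names are hypothetical):

```python
import torch.nn as nn

def init_unmt_from_lm(unmt_model: nn.Module, pretrained_lm: nn.Module) -> None:
    """Copy pretrained bilingual LM weights into the UNMT encoder and decoder.

    Hypothetical attribute names; strict=False skips parameters that exist
    only on one side (e.g. cross-attention in the decoder)."""
    lm_weights = pretrained_lm.state_dict()
    unmt_model.encoder.load_state_dict(lm_weights, strict=False)
    unmt_model.decoder.load_state_dict(lm_weights, strict=False)
```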
no code implementations • EACL (AdaptNLP) 2021 • Dario Stojanovski, Alexander Fraser
Achieving satisfactory performance in machine translation on domains for which there is no training data is challenging.
no code implementations • WS 2019 • Dario Stojanovski, Alexander Fraser
We describe LMU Munich's machine translation system for English→German translation which was used to participate in the WMT19 shared task on supervised news translation.
no code implementations • WS 2019 • Dario Stojanovski, Viktor Hangya, Matthias Huck, Alexander Fraser
We describe LMU Munich's machine translation system for German→Czech translation which was used to participate in the WMT19 shared task on unsupervised news translation.
no code implementations • WS 2018 • Dario Stojanovski, Alexander Fraser
We show that NMT models taking advantage of context oracle signals can achieve considerable gains in BLEU, of up to 7.02 BLEU for coreference and 1.89 BLEU for coherence on subtitle translation.
no code implementations • WS 2018 • Matthias Huck, Dario Stojanovski, Viktor Hangya, Alexander Fraser
The systems were used for our participation in the WMT18 biomedical translation task and in the shared task on machine translation of news.
no code implementations • WS 2018 • Dario Stojanovski, Viktor Hangya, Matthias Huck, Alexander Fraser
We describe LMU Munich's unsupervised machine translation systems for English↔German translation.