no code implementations • FNP (LREC) 2022 • Negar Foroutan, Angelika Romanou, Stéphane Massonnet, Rémi Lebret, Karl Aberer
The language models were fine-tuned on a financial document collection in three languages (English, Spanish, and Greek) and aim to identify the beginning of the narrative summary part of each document.
1 code implementation • 23 Oct 2023 • Negar Foroutan, Mohammadreza Banaei, Karl Aberer, Antoine Bosselut
We evaluate the cross-lingual reasoning abilities of MultiLMs in two schemes: (1) where the language of the context and the question remain the same in the new languages that are tested (i.e., the reasoning is still monolingual, but the model must transfer the learned reasoning ability across languages), and (2) where the language of the context and the question is different (which we term code-switched reasoning).
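The two evaluation schemes above can be sketched as a small data-construction step. This is a hypothetical illustration, not the paper's actual pipeline: the helper name, language codes, and example sentences are all assumptions made for clarity.

```python
# Hypothetical sketch of the two cross-lingual evaluation schemes:
# scheme (1) keeps context and question in the same (new) language,
# scheme (2) mixes languages ("code-switched reasoning").

def build_eval_example(context_by_lang, question_by_lang,
                       context_lang, question_lang):
    """Pair a context and a question, each drawn from a chosen language."""
    return {
        "context": context_by_lang[context_lang],
        "question": question_by_lang[question_lang],
        # Same language -> monolingual reasoning transferred to a new language;
        # different languages -> code-switched reasoning.
        "scheme": ("monolingual" if context_lang == question_lang
                   else "code-switched"),
    }

context = {"en": "Ada wrote the first program.",
           "fr": "Ada a écrit le premier programme."}
question = {"en": "Who wrote the first program?",
            "fr": "Qui a écrit le premier programme ?"}

same = build_eval_example(context, question, "fr", "fr")    # scheme (1)
mixed = build_eval_example(context, question, "fr", "en")   # scheme (2)
```

The point of the sketch is only that the two schemes differ in whether the context language and question language coincide, not in the task itself.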
no code implementations • 4 Oct 2023 • Deniz Bayazit, Negar Foroutan, Zeming Chen, Gail Weiss, Antoine Bosselut
In this work, we investigate whether pretrained language models contain various knowledge-critical subnetworks: particular sparse computational subgraphs responsible for encoding specific knowledge the model has memorized.
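As a rough intuition for what a "knowledge-critical subnetwork" means, the following toy sketch applies a sparse binary mask to a weight matrix so that only a few weights survive. This is an illustrative assumption, not the method from the paper: the functions and the tiny weight matrix are invented for the example.

```python
# Toy illustration (not the paper's method): a sparse binary mask selects
# the few weights forming a hypothetical knowledge-critical subgraph;
# all other weights are zeroed out.

def apply_mask(weights, mask):
    """Element-wise product of a weight matrix and a binary mask."""
    return [[w * m for w, m in zip(w_row, m_row)]
            for w_row, m_row in zip(weights, mask)]

def sparsity(mask):
    """Fraction of weights removed by the mask."""
    flat = [m for row in mask for m in row]
    return 1 - sum(flat) / len(flat)

weights = [[0.5, -1.2, 0.0],
           [2.0,  0.3, -0.7]]
# Keep only two weights: the assumed knowledge-critical subnetwork.
mask = [[1, 0, 0],
        [0, 0, 1]]

sub = apply_mask(weights, mask)
```

In this picture, "knowledge-critical" means that removing exactly these masked-in weights would erase a specific memorized fact while leaving the rest of the network intact.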
1 code implementation • 29 Jun 2023 • Yasmine Karoui, Rémi Lebret, Negar Foroutan, Karl Aberer
Our evaluation across three distinct tasks (image-text retrieval, visual entailment, and natural language visual reasoning) demonstrates that this approach outperforms the state-of-the-art multilingual vision-language models without requiring large parallel corpora.
1 code implementation • 25 May 2022 • Negar Foroutan, Mohammadreza Banaei, Rémi Lebret, Antoine Bosselut, Karl Aberer
Multilingual pre-trained language models transfer remarkably well to cross-lingual downstream tasks.