no code implementations • EMNLP (BlackboxNLP) 2021 • Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar
Most recent work on probing representations has focused on BERT, on the presumption that the findings carry over to other models.
no code implementations • 17 Apr 2024 • Ali Modarressi, Abdullatif Köksal, Ayyoob Imani, Mohsen Fayyaz, Hinrich Schütze
While current large language models (LLMs) demonstrate some capabilities in knowledge-intensive tasks, they are limited by their reliance on model parameters as an implicit storage mechanism.
1 code implementation • 5 Jun 2023 • Ali Modarressi, Mohsen Fayyaz, Ehsan Aghazadeh, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar
An emerging approach to explaining Transformer-based models is vector-based analysis of how their representations are formed.
1 code implementation • 23 May 2023 • Ali Modarressi, Ayyoob Imani, Mohsen Fayyaz, Hinrich Schütze
Large language models (LLMs) have significantly advanced the field of natural language processing (NLP) through their extensive parameter counts and comprehensive training data.
1 code implementation • 6 Feb 2023 • Ali Modarressi, Hossein Amirkhani, Mohammad Taher Pilehvar
A popular workaround is to train a robust model by re-weighting training examples based on a secondary biased model.
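The re-weighting idea can be sketched as follows: examples the secondary biased model already classifies confidently are likely bias-aligned, so they are down-weighted when training the main model. This is a minimal illustrative sketch (the weighting scheme and function names are assumptions, not the paper's exact formulation):

```python
import numpy as np

def debias_weights(bias_probs, labels):
    """Down-weight examples the secondary biased model already gets right.

    bias_probs: (n, k) array of the biased model's softmax outputs.
    labels: (n,) gold class indices.
    Returns per-example weights in [0, 1]: weight = 1 - p_bias(gold class).
    """
    p_gold = bias_probs[np.arange(len(labels)), labels]
    return 1.0 - p_gold

# Toy example: the biased model is confident on example 0, unsure on example 1.
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
labels = np.array([0, 1])
weights = debias_weights(probs, labels)
# Example 0 (bias-aligned) is down-weighted; example 1 keeps more weight.
```

These weights would then scale each example's loss term, so the main model is pushed to learn from the harder, less bias-aligned examples.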
no code implementations • 10 Nov 2022 • Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Mohammad Taher Pilehvar, Yadollah Yaghoobzadeh, Samira Ebrahimi Kahou
In this work, we employ these two metrics for the first time in NLP.
1 code implementation • NAACL 2022 • Ali Modarressi, Mohsen Fayyaz, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar
There has been a growing interest in interpreting the underlying dynamics of Transformers.
1 code implementation • ACL 2022 • Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar
To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method.
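Gradient-based saliency scores an input feature by how much the model's output changes with respect to it, commonly as gradient times input. A minimal sketch with a linear model, where the gradient is exact and equals the weight vector (the names are illustrative, not the paper's code):

```python
import numpy as np

def gradient_x_input(w, x):
    """Gradient-times-input attribution for a linear score s = w . x.

    The gradient of s w.r.t. feature i is w_i, so the attribution of
    feature i is w_i * x_i (elementwise product).
    """
    return w * x

w = np.array([2.0, -1.0, 0.5])   # hypothetical trained weights
x = np.array([1.0, 3.0, 4.0])    # one input example
attr = gradient_x_input(w, x)    # per-feature saliency scores
# For a linear model the attributions sum exactly to the score w . x.
```

For deep models the same quantity is obtained via backpropagation; the linear case is just the setting where the completeness property (attributions summing to the output) holds exactly.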
1 code implementation • EMNLP 2021 • Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar
Several studies have been carried out on revealing linguistic features captured by BERT.