no code implementations • RANLP 2021 • Farane Jalali Farahani, Gholamreza Ghassem-Sani
In addition, in what is, to our knowledge, the first application of active learning to Persian NER, we achieved 92.15% and 92.41% of the performance of the aforementioned supervised learning experiments using only 30% of Arman and 20% of Peyma, respectively.
no code implementations • 8 Mar 2024 • Seyed Parsa Neshaei, Yasaman Boreshban, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel
In this paper, we explore the effect of quantization on the robustness of Transformer-based models.
1 code implementation • 26 Sep 2021 • Yasaman Boreshban, Seyed Morteza Mirbostani, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel, Shahin Amiriparian
Contemporary question answering (QA) systems, including transformer-based architectures, suffer from increasing computational and model complexity, which renders them inefficient for real-world applications with limited resources.
1 code implementation • 13 Nov 2019 • Behnam Sabeti, Pedram Hosseini, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel
The results show that the generated sentiment lexicon achieves acceptable performance in terms of accuracy and F-measure.
no code implementations • 23 Jan 2014 • Seyed Abolghasem Mirroshandel, Gholamreza Ghassem-Sani
We show that by combining the global information of such a cluster with local decisions of a general classifier, a bootstrapping cross-document classifier can be built to extract temporal relations between events.