no code implementations • ACL 2022 • Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Greenfeld, Reut Tsarfaty
First, so far, Hebrew resources for training large language models are not of the same magnitude as their English counterparts.
1 code implementation • 25 Jan 2024 • Elron Bandel, Yotam Perlitz, Elad Venezian, Roni Friedman-Melamed, Ofir Arviv, Matan Orbach, Shachar Don-Yehyia, Dafna Sheinwald, Ariel Gera, Leshem Choshen, Michal Shmueli-Scheuer, Yoav Katz
In the dynamic landscape of generative NLP, traditional text processing pipelines limit research flexibility and reproducibility, as they are tailored to specific dataset, task, and model combinations.
no code implementations • 22 Aug 2023 • Yotam Perlitz, Elron Bandel, Ariel Gera, Ofir Arviv, Liat Ein-Dor, Eyal Shnarch, Noam Slonim, Michal Shmueli-Scheuer, Leshem Choshen
The increasing versatility of language models (LMs) has given rise to a new class of benchmarks that comprehensively assess a broad range of capabilities.
no code implementations • 20 Dec 2022 • Elron Bandel, Yoav Katz, Noam Slonim, Liat Ein-Dor
We offer our protocol as a simple yet strong baseline for works that wish to make incremental advancements in the field of attribute-controlled text rewriting.
1 code implementation • 23 Oct 2022 • Elron Bandel, Yoav Goldberg, Yanai Elazar
While fine-tuned language models perform well on many tasks, they have also been shown to rely on surface features such as lexical overlap.
1 code implementation • ACL 2022 • Elron Bandel, Ranit Aharonov, Michal Shmueli-Scheuer, Ilya Shnayderman, Noam Slonim, Liat Ein-Dor
Furthermore, we suggest a method that, given a sentence, identifies points in the quality control space that are expected to yield optimal generated paraphrases.
2 code implementations • 8 Apr 2021 • Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Shaked Greenfeld, Reut Tsarfaty
Second, there are no accepted tasks and benchmarks by which to evaluate the progress of Hebrew PLMs.