no code implementations • EMNLP (sustainlp) 2020 • Moshe Wasserblat, Oren Pereg, Peter Izsak
We also show that the distillation of large pre-trained models is more effective in real-life scenarios where limited amounts of labeled training data are available.
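Distillation in this setting typically trains a small student model to match a large teacher's softened output distribution. A minimal sketch of the standard temperature-scaled distillation loss (numpy; the function names and temperature value are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # temperature-scaled softmax; a higher temperature softens the distribution
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from the softened teacher distribution to the student's,
    # scaled by T^2 so gradients stay comparable across temperatures
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1).mean()
    return float(kl * temperature * temperature)
```

The loss is zero when the student exactly reproduces the teacher's logits and grows as the two output distributions diverge.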
no code implementations • 16 Apr 2024 • Moshe Berchansky, Daniel Fleischer, Moshe Wasserblat, Peter Izsak
This approach focuses the reasoning process on generating an attribution-centric output.
1 code implementation • 20 Oct 2023 • Moshe Berchansky, Peter Izsak, Avi Caciularu, Ido Dagan, Moshe Wasserblat
Fusion-in-Decoder (FiD) is an effective retrieval-augmented language model applied across a variety of open-domain tasks, such as question answering, fact checking, etc.
1 code implementation • 30 Mar 2022 • Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, Omer Levy
Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings.
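As a point of reference for the positional-encoding discussion, learned absolute positional embeddings are conventionally added to token embeddings before the first transformer block. A minimal sketch (numpy; the shapes and table sizes are illustrative, and in a real model both tables would be trained):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, max_len, d_model = 100, 16, 8

# learned lookup tables, normally optimized jointly with the rest of the model
token_emb = rng.normal(size=(vocab_size, d_model))
pos_emb = rng.normal(size=(max_len, d_model))

def embed(token_ids):
    # input to the first transformer block: token embedding + position embedding
    positions = np.arange(len(token_ids))
    return token_emb[token_ids] + pos_emb[positions]

x = embed(np.array([5, 7, 5]))
# the repeated token id 5 receives different vectors at positions 0 and 2,
# which is how the model distinguishes identical tokens at different positions
```

This additive scheme is what the paper's "positional embeddings" refers to; the work in question examines what happens when it is omitted entirely.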
4 code implementations • EMNLP 2021 • Peter Izsak, Moshe Berchansky, Omer Levy
While large language models à la BERT are used ubiquitously in NLP, pretraining them is considered a luxury that only a few well-funded industry labs can afford.
no code implementations • 14 Oct 2019 • Peter Izsak, Shira Guskin, Moshe Wasserblat
In this work in progress, we combine the effectiveness of transfer learning from pre-trained masked language models with a semi-supervised approach to train a fast and compact model using both labeled and unlabeled examples.
5 code implementations • 14 Oct 2019 • Ofir Zafrir, Guy Boudoukh, Peter Izsak, Moshe Wasserblat
Recently, pre-trained Transformer-based language models such as BERT and GPT have shown great improvement in many Natural Language Processing (NLP) tasks.
no code implementations • EMNLP 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
no code implementations • COLING 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Ido Dagan, Yoav Goldberg, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
no code implementations • 26 Jul 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Ido Dagan, Yoav Goldberg, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
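The corpus-based expansion described above can be illustrated, in simplified form, by ranking candidate terms by cosine similarity to the centroid of the seed terms' embeddings. This is only a sketch of the general idea, not the SetExpander pipeline itself (which uses richer context-based term representations); the embedding values below are toy numbers:

```python
import numpy as np

# toy pre-trained term embeddings (illustrative values only)
embeddings = {
    "paris":  np.array([0.90, 0.10, 0.00]),
    "london": np.array([0.80, 0.20, 0.10]),
    "berlin": np.array([0.85, 0.15, 0.05]),
    "apple":  np.array([0.10, 0.90, 0.20]),
    "banana": np.array([0.05, 0.95, 0.10]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(seed_terms, k=1):
    # represent the seed set by the centroid of its term embeddings
    centroid = np.mean([embeddings[t] for t in seed_terms], axis=0)
    # rank all non-seed terms by similarity to the centroid
    candidates = [t for t in embeddings if t not in seed_terms]
    ranked = sorted(candidates, key=lambda t: cosine(embeddings[t], centroid),
                    reverse=True)
    return ranked[:k]

print(expand(["paris", "london"]))  # -> ['berlin'] with these toy vectors
```

With real corpus-derived embeddings, the same centroid-and-rank scheme surfaces terms from the seed set's semantic class (here, city names rather than fruits).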