2 code implementations • 31 Oct 2022 • Shira Guskin, Moshe Wasserblat, Chang Wang, Haihao Shen
Our quantized length-adaptive MiniLM model (QuaLA-MiniLM) is trained only once, dynamically fits any inference scenario, and achieves an accuracy-efficiency trade-off superior to other efficient approaches at any computational budget on the SQuAD1.1 dataset (up to x8.8 speedup with <1% accuracy loss).
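The length-adaptive half of this recipe boils down to dropping the least-attended tokens layer by layer, with the per-layer keep ratios forming a "length configuration" that can be chosen at inference time to meet a compute budget. Below is a minimal PyTorch sketch of that idea; the layer interface, score shapes, and all function names are illustrative assumptions, not the authors' implementation.

```python
import torch

def prune_tokens(hidden, attn, keep_ratio):
    """Keep the tokens that receive the most attention.

    hidden: (batch, seq, dim); attn: (batch, seq, seq), head-averaged.
    """
    k = max(1, int(hidden.size(1) * keep_ratio))
    importance = attn.sum(dim=1)                      # attention mass each token receives
    keep = importance.topk(k, dim=-1).indices
    keep = keep.sort(dim=-1).values                   # preserve original token order
    return hidden.gather(1, keep.unsqueeze(-1).expand(-1, -1, hidden.size(-1)))

def forward_with_budget(layers, hidden, length_config):
    # length_config, e.g. (1.0, 0.8, 0.6, ...), is searched once after
    # training and swapped per deployment scenario -- no retraining needed.
    for layer, keep_ratio in zip(layers, length_config):
        hidden, attn = layer(hidden)                  # assumed layer API for this sketch
        hidden = prune_tokens(hidden, attn, keep_ratio)
    return hidden
```

The quantization half can then be layered on top of the trained model, for example dynamic INT8 quantization of the linear layers via `torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)`.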
no code implementations • 18 Nov 2021 • Shira Guskin, Moshe Wasserblat, Ke Ding, Gyuwan Kim
Additionally, a separate model must be trained for each inference scenario with its distinct computational budget.
no code implementations • 14 Oct 2019 • Peter Izsak, Shira Guskin, Moshe Wasserblat
In this work in progress, we combine the effectiveness of transfer learning provided by pre-trained masked language models with a semi-supervised approach to train a fast and compact model using both labeled and unlabeled examples.
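Assuming the semi-supervised step follows a standard pseudo-labeling recipe, a minimal sketch might look like the following: a fine-tuned masked-LM teacher labels the unlabeled pool, and a small, fast student is trained on the gold labels plus the confident pseudo-labels. All names here (`teacher_predict`, the TF-IDF student) are placeholders, not the paper's actual components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_compact_student(teacher_predict, labeled, unlabeled, threshold=0.9):
    """labeled: list of (text, label) pairs; unlabeled: list of texts."""
    texts = [t for t, _ in labeled]
    labels = [y for _, y in labeled]
    for text in unlabeled:
        probs = teacher_predict(text)        # assumed: returns class-probability list
        label, conf = max(enumerate(probs), key=lambda p: p[1])
        if conf >= threshold:                # keep only confident pseudo-labels
            texts.append(text)
            labels.append(label)
    # The "fast and compact" student: a linear model over TF-IDF features.
    vectorizer = TfidfVectorizer()
    student = LogisticRegression(max_iter=1000)
    student.fit(vectorizer.fit_transform(texts), labels)
    return vectorizer, student
```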
no code implementations • EMNLP 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
no code implementations • COLING 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Ido Dagan, Yoav Goldberg, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
no code implementations • 26 Jul 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Ido Dagan, Yoav Goldberg, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
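As a rough illustration of corpus-based set expansion in the spirit of the three SetExpander entries above (not the released system, which combines several context types), the sketch below averages the seed terms' embeddings and returns the nearest neighbors as expansion candidates. All parameters and names are simplifying assumptions.

```python
import numpy as np
from gensim.models import Word2Vec

def expand_seed_set(tokenized_corpus, seed_terms, topn=10):
    # A single word2vec model stands in for SetExpander's multiple
    # context-type embeddings.
    model = Word2Vec(tokenized_corpus, vector_size=100, min_count=2)
    seeds = [t for t in seed_terms if t in model.wv]
    centroid = np.mean([model.wv[t] for t in seeds], axis=0)
    neighbors = model.wv.similar_by_vector(centroid, topn=topn + len(seeds))
    return [term for term, _ in neighbors if term not in seeds][:topn]

# Usage: expand_seed_set(corpus, ["paris", "london", "berlin"]) should
# surface other city terms that share the seeds' corpus contexts.
```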