1 code implementation • 12 Feb 2024 • Noam Razin, Yotam Alexander, Edo Cohen-Karlik, Raja Giryes, Amir Globerson, Nadav Cohen
This paper theoretically studies the implicit bias of policy gradient in terms of extrapolation to unseen initial states.
no code implementations • 4 Feb 2024 • Edo Cohen-Karlik, Eyal Rozenberg, Daniel Freedman
Graph generation is a fundamental problem in various domains, including chemistry and social networks.
no code implementations • 25 Oct 2022 • Edo Cohen-Karlik, Itamar Menuhin-Gruman, Raja Giryes, Nadav Cohen, Amir Globerson
Overparameterization in deep learning typically refers to settings where a trained neural network (NN) has representational capacity to fit the training data in many ways, some of which generalize well, while others do not.
no code implementations • 9 Feb 2022 • Edo Cohen-Karlik, Avichai Ben David, Nadav Cohen, Amir Globerson
When using recurrent neural networks (RNNs) it is common practice to apply trained models to sequences longer than those seen in training.
no code implementations • NeurIPS 2020 • Edo Cohen-Karlik, Avichai Ben David, Amir Globerson
We show that RNNs can be regularized towards permutation invariance, and that this can result in compact models, as compared to non-recurrent architectures.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Shai Gretz, Yonatan Bilu, Edo Cohen-Karlik, Noam Slonim
Argument generation is a challenging task, and research on it is timely given its potential impact on social media and the dissemination of information.
2 code implementations • 26 Nov 2019 • Shai Gretz, Roni Friedman, Edo Cohen-Karlik, Assaf Toledo, Dan Lahav, Ranit Aharonov, Noam Slonim
To this end, we created a corpus of 30,497 arguments carefully annotated for point-wise quality, released as part of this work.
no code implementations • IJCNLP 2019 • Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, Noam Slonim
In spite of the inherent subjective nature of the task, both annotation schemes led to surprisingly consistent results.
no code implementations • 25 Sep 2019 • Edo Cohen-Karlik, Amir Globerson
Many machine learning tasks involve analysis of set-valued inputs, and thus the learned functions are expected to be permutation invariant.
no code implementations • 3 Sep 2019 • Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, Noam Slonim
In spite of the inherent subjective nature of the task, both annotation schemes led to surprisingly consistent results.
1 code implementation • WS 2019 • Yoav Kantor, Yoav Katz, Leshem Choshen, Edo Cohen-Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, Noam Slonim
We also present a spellchecker created for this task, which outperforms standard spellcheckers when evaluated on spellchecking.
Ranked #8 on Grammatical Error Correction on BEA-2019 (test)