no code implementations • 25 Mar 2024 • Aviv Slobodkin, Eran Hirsch, Arie Cattan, Tal Schuster, Ido Dagan
Recent efforts to address hallucinations in Large Language Models (LLMs) have focused on attributed text generation, which supplements generated texts with citations of supporting sources to enable post-generation fact-checking and correction.
no code implementations • 22 Mar 2024 • Aviv Slobodkin, Ori Shapira, Ran Levy, Ido Dagan
This study lays the groundwork for further exploration of modular text generation in the multi-document setting, offering potential improvements in the quality and reliability of generated content.
1 code implementation • 18 Oct 2023 • Aviv Slobodkin, Omer Goldman, Avi Caciularu, Ido Dagan, Shauli Ravfogel
In this paper, we explore the behavior of LLMs when presented with (un)answerable queries.
1 code implementation • 13 Oct 2023 • Aviv Slobodkin, Avi Caciularu, Eran Hirsch, Ido Dagan
Further, we substantially improve the silver training data quality via GPT-4 distillation.
no code implementations • 16 Aug 2023 • Aviv Slobodkin, Niv Nachum, Shmuel Amar, Ori Shapira, Ido Dagan
Current approaches to text summarization are predominantly automatic, leaving rather limited room for human intervention and control over the process.
2 code implementations • 24 Oct 2022 • Aviv Slobodkin, Paul Roit, Eran Hirsch, Ori Ernst, Ido Dagan
Producing a reduced version of a source text, as in generic or focused summarization, inherently involves two distinct subtasks: deciding on targeted content and generating a coherent text conveying it.
no code implementations • *SEM (NAACL) 2022 • Aviv Slobodkin, Leshem Choshen, Omri Abend
We further show an additional gain when using both semantic and syntactic structures in some language pairs.
1 code implementation • NAACL 2021 • Aviv Slobodkin, Leshem Choshen, Omri Abend
Probing neural models for the ability to perform downstream tasks using their activation patterns is often used to localize which parts of the network specialize in which tasks.