1 code implementation • Findings (EMNLP) 2021 • Yusen Zhang, Ansong Ni, Tao Yu, Rui Zhang, Chenguang Zhu, Budhaditya Deb, Asli Celikyilmaz, Ahmed Hassan Awadallah, Dragomir Radev
Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series.
1 code implementation • 4 Apr 2024 • Ryo Kamoi, Sarkar Snigdha Sarathi Das, Renze Lou, Jihyun Janice Ahn, Yilun Zhao, Xiaoxin Lu, Nan Zhang, Yusen Zhang, Ranran Haoran Zhang, Sujeeth Reddy Vummanthala, Salika Dave, Shaobo Qin, Arman Cohan, Wenpeng Yin, Rui Zhang
This work introduces ReaLMistake, the first error detection benchmark consisting of objective, realistic, and diverse errors made by LLMs.
no code implementations • 12 Jan 2024 • Yusen Zhang
In conclusion, this paper identifies the lack of a standardized benchmark framework as a current limitation in dynamic graph learning research.
1 code implementation • 14 Nov 2023 • Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang
However, current work in summarization metrics and Large Language Models (LLMs) evaluation has not explored fair abstractive summarization.
1 code implementation • 3 Nov 2023 • Nan Zhang, Yusen Zhang, Wu Guo, Prasenjit Mitra, Rui Zhang
In this paper, we investigate and improve faithfulness in summarization on a broad range of medical summarization tasks.
1 code implementation • 7 Jun 2023 • Yusen Zhang, Jun Wang, Zhiguo Wang, Rui Zhang
However, existing CLSP models are proposed and evaluated separately on datasets covering only limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP across diverse NLs and MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual semantic parsing featuring 22 natural languages and 8 meaning representations, built by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains.
1 code implementation • 9 Nov 2022 • Yusen Zhang, Yang Liu, ZiYi Yang, Yuwei Fang, Yulong Chen, Dragomir Radev, Chenguang Zhu, Michael Zeng, Rui Zhang
We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization based on hard prompt tuning and soft prefix tuning.
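A rough illustration of the two parameter-efficient ideas, using toy lists and arrays rather than a real language model (all names, shapes, and control tokens here are illustrative, not the paper's actual implementation):

```python
import numpy as np

def hard_prompt(input_ids, control_ids):
    # Hard prompt tuning: prepend discrete control tokens (e.g. a desired
    # length bucket or entity marker) to the input ids; the model's own
    # weights stay frozen and only the textual prompt carries the control.
    return control_ids + input_ids

class SoftPrefix:
    # Soft prefix tuning: a small matrix of trainable continuous vectors
    # prepended to the token embeddings; only these vectors are updated
    # during training while the underlying model stays frozen.
    def __init__(self, prefix_len, hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.prefix = rng.normal(size=(prefix_len, hidden))

    def apply(self, token_embeds):
        # token_embeds: (seq_len, hidden) -> (prefix_len + seq_len, hidden)
        return np.concatenate([self.prefix, token_embeds], axis=0)
```

The two approaches trade off differently: hard prompts need no extra parameters at all, while soft prefixes add a small trainable component per control attribute.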
1 code implementation • COLING 2022 • Yusen Zhang, Zhongli Li, Qingyu Zhou, Ziyi Liu, Chao Li, Mina Ma, Yunbo Cao, Hongzhi Liu
To automatically correct handwritten assignments, the traditional approach is to use an OCR model to recognize characters and compare them to answers.
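A minimal sketch of this traditional compare-to-answer step, assuming the OCR model has already produced a recognized string (the similarity threshold is an illustrative choice, not from the paper):

```python
from difflib import SequenceMatcher

def grade(recognized: str, answer: str, threshold: float = 0.9) -> bool:
    # `recognized` is the string an OCR model read from a handwritten
    # answer; compare it to the reference answer with a similarity ratio
    # so that minor recognition errors are tolerated.
    ratio = SequenceMatcher(None, recognized.strip(), answer.strip()).ratio()
    return ratio >= threshold
```

The weakness of this pipeline, which motivates end-to-end alternatives, is that any OCR error propagates directly into the grading decision.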
2 code implementations • ACL 2022 • Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed H. Awadallah, Dragomir Radev, Rui Zhang
To the best of our knowledge, Summ$^N$ is the first multi-stage split-then-summarize framework for long input summarization.
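The split-then-summarize idea can be sketched as a multi-stage loop, with a placeholder `summarize` standing in for the actual fine-tuned model (function names, the chunk size, and the stage limit are illustrative, not the paper's settings):

```python
def split(text: str, max_words: int) -> list:
    # Split the input into word-bounded chunks of at most max_words words.
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize(chunk: str) -> str:
    # Placeholder for a trained summarizer; here we just keep the
    # chunk's first sentence.
    return chunk.split(".")[0] + "."

def summ_n(text: str, target_words: int = 100, max_stages: int = 4) -> str:
    # Repeatedly split, summarize each chunk, and concatenate the partial
    # summaries, until the intermediate text fits the target length.
    for _ in range(max_stages):
        if len(text.split()) <= target_words:
            break
        text = " ".join(summarize(c) for c in split(text, target_words))
    return text
```

Each stage shrinks the input, so arbitrarily long documents eventually fit within a fixed-context summarizer's input limit.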
1 code implementation • ACL 2022 • Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, Dragomir Radev
Transformer-based models have achieved state-of-the-art performance on short-input summarization.
1 code implementation • EMNLP (ACL) 2021 • Ansong Ni, Zhangir Azerbayev, Mutethia Mutuma, Troy Feng, Yusen Zhang, Tao Yu, Ahmed Hassan Awadallah, Dragomir Radev
We also provide explanations for models and evaluation metrics to help users understand the model behaviors and select models that best suit their needs.
1 code implementation • Findings (ACL) 2021 • Chang Shu, Yusen Zhang, Xiangyu Dong, Peng Shi, Tao Yu, Rui Zhang
Text generation from semantic parses aims to produce textual descriptions of formal representation inputs such as logic forms and SQL queries.
1 code implementation • 23 Oct 2020 • Yusen Zhang, Xiangyu Dong, Shuaichen Chang, Tao Yu, Peng Shi, Rui Zhang
Neural models have achieved significant results on the text-to-SQL task, where most current work assumes that all input questions are legal and generates a SQL query for any input.
1 code implementation • 12 Jul 2019 • Hui Chen, Zijia Lin, Guiguang Ding, JianGuang Lou, Yusen Zhang, Borje Karlsson
The dominant approaches for named entity recognition (NER) mostly adopt complex recurrent neural networks (RNN), e.g., long short-term memory (LSTM).
Ranked #23 on Named Entity Recognition (NER) on Ontonotes v5 (English)