no code implementations • 26 Feb 2024 • Todd Morrill, Zhaoyuan Deng, Yanda Chen, Amith Ananthram, Colin Wayne Leach, Kathleen McKeown
Based on these results, which show the utility of social orientation tags for dialogue outcome prediction tasks, we release our datasets, code, and models fine-tuned to predict social orientation tags on dialogue utterances.
no code implementations • 19 Feb 2024 • Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He
Pre-trained language models (LMs) are capable of in-context learning (ICL): they can adapt to a task with only a few examples given in the prompt without any parameter update.
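The in-context learning setup described above can be illustrated with a minimal sketch of prompt construction; the task, labels, and examples here are hypothetical placeholders, not from the paper:

```python
# Illustrative sketch of few-shot in-context learning: labeled
# demonstrations are concatenated in the prompt, followed by an
# unlabeled query; the model adapts without any parameter update.
def build_icl_prompt(examples, query):
    """Concatenate labeled demonstrations followed by an unlabeled query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [("Great movie!", "positive"), ("Terribly boring.", "negative")]
prompt = build_icl_prompt(demos, "I loved every minute.")
print(prompt)
```

The resulting prompt ends at the label slot, so a language model's next-token prediction serves as the task prediction.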
1 code implementation • 25 Jan 2024 • Yanda Chen, Chandan Singh, Xiaodong Liu, Simiao Zuo, Bin Yu, He He, Jianfeng Gao
We propose explanation-consistency finetuning (EC-finetuning), a method that adapts LLMs to generate more consistent natural-language explanations on related examples.
no code implementations • 17 Jul 2023 • Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown
To answer these questions, we propose to evaluate counterfactual simulatability of natural language explanations: whether an explanation can enable humans to precisely infer the model's outputs on diverse counterfactuals of the explained input.
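A simulatability-style score of the kind described above can be sketched as an agreement rate; the function name and data below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: the fraction of counterfactual inputs on which a
# human's inferred model output (after reading the explanation) matches
# the model's actual output.
def simulation_precision(human_guesses, model_outputs):
    """Agreement rate between human-inferred and actual model outputs."""
    assert len(human_guesses) == len(model_outputs)
    matches = sum(g == o for g, o in zip(human_guesses, model_outputs))
    return matches / len(model_outputs)

score = simulation_precision(["yes", "no", "yes", "yes"],
                             ["yes", "no", "no", "yes"])
# agreement on three of the four counterfactuals
```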
no code implementations • 20 Dec 2022 • Yukun Huang, Yanda Chen, Zhou Yu, Kathleen McKeown
We propose to combine in-context learning objectives with language modeling objectives to distill both the ability to read in-context examples and task knowledge to the smaller models.
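The combination of objectives described above can be sketched as a weighted sum; the function, weighting scheme, and values below are assumed for illustration rather than taken from the paper:

```python
# Illustrative sketch of a distillation objective mixing an
# in-context-learning loss with a language-modeling loss via a
# weighting coefficient alpha.
def combined_distillation_loss(icl_loss, lm_loss, alpha=0.5):
    """Weighted sum of the ICL and LM training objectives."""
    return alpha * icl_loss + (1.0 - alpha) * lm_loss

loss = combined_distillation_loss(icl_loss=2.0, lm_loss=1.0, alpha=0.25)
```

With alpha = 0.25, three quarters of the training signal comes from the language-modeling term and one quarter from the in-context-learning term.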
1 code implementation • 16 Sep 2022 • Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He
In-context learning (ICL) suffers from oversensitivity to the prompt, making it unreliable in real-world scenarios.
1 code implementation • ACL 2022 • Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He
The goal of meta-learning is to learn to adapt to a new task with only a few labeled examples.
no code implementations • ACL 2021 • Yanda Chen, Chris Kedzie, Suraj Nair, Petra Galuščáková, Rui Zhang, Douglas W. Oard, Kathleen McKeown
This paper proposes an approach to cross-language sentence selection in a low-resource setting.
no code implementations • 24 Oct 2020 • Yanda Chen, Md Arafat Sultan, Vittorio Castelli
Automatically generated synthetic training examples have been shown to improve performance in machine reading comprehension (MRC).
1 code implementation • IJCNLP 2019 • Ruiqi Zhong, Yanda Chen, Desmond Patton, Charlotte Selous, Kathy Mckeown
Gang-involved youth in cities such as Chicago sometimes post on social media to express their aggression towards rival gangs, and previous research has demonstrated that a deep learning approach can predict aggression and loss in such posts.