no code implementations • 1 Apr 2024 • Yi-Lin Tuan, Xilun Chen, Eric Michael Smith, Louis Martin, Soumya Batra, Asli Celikyilmaz, William Yang Wang, Daniel M. Bikel
As large language models (LLMs) become widely accessible, the trade-off between safety and helpfulness can significantly impact user experience.
1 code implementation • 20 Dec 2022 • Yi-Lin Tuan, Alon Albalak, Wenda Xu, Michael Saxon, Connor Pryor, Lise Getoor, William Yang Wang
Despite their widespread adoption, neural conversation models have yet to exhibit natural chat capabilities with humans.
no code implementations • 7 Oct 2022 • Yi-Lin Tuan, Zih-Yun Chiu, William Yang Wang
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data that involves multiple sub-components in a flexible and interpretable fashion.
1 code implementation • NeurIPS 2023 • Zih-Yun Chiu, Yi-Lin Tuan, William Yang Wang, Michael C. Yip
In this work, we present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility.
1 code implementation • 12 May 2022 • Alon Albalak, Yi-Lin Tuan, Pegah Jandaghi, Connor Pryor, Luke Yoffe, Deepak Ramachandran, Lise Getoor, Jay Pujara, William Yang Wang
Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models.
no code implementations • Findings (ACL) 2022 • Kai Nakamura, Sharon Levy, Yi-Lin Tuan, Wenhu Chen, William Yang Wang
A pressing challenge in current dialogue systems is to successfully converse with users on topics with information distributed across different modalities.
1 code implementation • Findings (ACL) 2022 • Yi-Lin Tuan, Sajjad Beygi, Maryam Fazel-Zarandi, Qiaozi Gao, Alessandra Cervone, William Yang Wang
Our proposed method allows a single transformer model to directly walk on a large-scale knowledge graph to generate responses.
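The core constraint of walking on a knowledge graph, that each generation step may only move to an entity adjacent to the current node, can be sketched as follows. This is an illustrative toy, not the paper's model; the graph, relation names, and `walk` helper are all assumptions for demonstration.

```python
# Toy knowledge graph: node -> {relation: neighbor}.
# In the paper a transformer chooses the relation at each step; here the
# relation sequence is given, to show only the adjacency constraint.
graph = {
    "Inception": {"directed_by": "Christopher Nolan",
                  "starring": "Leonardo DiCaprio"},
    "Christopher Nolan": {"directed": "Dunkirk"},
}

def walk(start, relations):
    """Follow relations from `start`, stopping if an edge is unavailable,
    so every visited entity is reachable from the previous one."""
    node, path = start, [start]
    for rel in relations:
        nxt = graph.get(node, {}).get(rel)
        if nxt is None:
            break  # the current node has no such outgoing relation
        path.append(nxt)
        node = nxt
    return path

path = walk("Inception", ["directed_by", "directed"])
```

Restricting each step to outgoing edges of the current node is what keeps generated entity mentions grounded in the graph.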
no code implementations • 10 Mar 2022 • Lucas Relic, BoWen Zhang, Yi-Lin Tuan, Michael Beyeler
Retinal implants have the potential to treat incurable blindness, yet the quality of the artificial vision they produce is still rudimentary.
1 code implementation • NLP4ConvAI (ACL) 2022 • Alon Albalak, Varun Embar, Yi-Lin Tuan, Lise Getoor, William Yang Wang
Existing studies on cross-sentence relation extraction in long-form multi-party conversations aim to improve relation extraction without considering the explainability of such methods.
Ranked #7 on Dialog Relation Extraction on DialogRE
no code implementations • 4 Aug 2021 • Zih-Yun Chiu, Yi-Lin Tuan, Hung-Yi Lee, Li-Chen Fu
For reinforcement learning (RL), it is challenging for an agent to master a task that requires a specific series of actions due to sparse rewards.
1 code implementation • NeurIPS 2021 • Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, William Yang Wang
To gain insights into the reasoning process of a generation model, we propose a new method, local explanation of response generation (LERG) that regards the explanations as the mutual interaction of segments in input and output sentences.
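One simple way to estimate such input–output interactions is occlusion: score each (input segment, output token) pair by how much the token's probability drops when the segment is removed. The sketch below uses a toy substring-overlap "model"; the function names and scoring scheme are illustrative assumptions in the spirit of LERG, not the paper's actual estimator.

```python
import math

# Toy stand-in for a response generation model: p(output_token | segments).
# Real LERG would query an actual dialogue model here.
def token_prob(input_segments, output_token):
    overlap = sum(output_token in seg for seg in input_segments)
    return (1 + overlap) / (2 + len(input_segments))

def occlusion_explanation(input_segments, output_tokens):
    """Score each (segment, token) pair by the log-probability drop when
    that segment is ablated -- a crude proxy for mutual interaction."""
    scores = {}
    for i, seg in enumerate(input_segments):
        ablated = input_segments[:i] + input_segments[i + 1:]
        for tok in output_tokens:
            full = token_prob(input_segments, tok)
            without = token_prob(ablated, tok)
            scores[(seg, tok)] = math.log(full) - math.log(without)
    return scores

scores = occlusion_explanation(["how are you", "the weather is nice"],
                               ["nice", "weather"])
```

A positive score means removing the segment hurts the token, i.e. the pair interacts; segments irrelevant to a token score at or below zero.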
no code implementations • EACL 2021 • Yi-Lin Tuan, Ahmed El-Kishky, Adithya Renduchintala, Vishrav Chaudhary, Francisco Guzmán, Lucia Specia
Quality estimation aims to measure the quality of translated content without access to a reference translation.
no code implementations • 30 Apr 2020 • Yi-Lin Tuan, Wei Wei, William Yang Wang
First, we train a large-scale language model and query it as textual knowledge.
1 code implementation • IJCNLP 2019 • Yi-Lin Tuan, Yun-Nung Chen, Hung-Yi Lee
This paper proposes a new task of applying dynamic knowledge graphs in neural conversation models and presents a novel TV series conversation corpus (DyKgChat) for the task.
no code implementations • 24 Aug 2018 • Yi-Lin Tuan, Jinzhi Zhang, Yujia Li, Hung-Yi Lee
In sequence generation tasks, many works use policy gradient for model optimization, sidestepping the intractability of backpropagating through non-differentiable evaluation metrics or through the discriminator in adversarial learning.
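The policy-gradient workaround can be sketched with a minimal REINFORCE loop on a two-token "vocabulary": the reward itself is never differentiated; it only weights the gradient of the log-probability of the sampled token. Everything here (the toy reward, learning rate, logit parameterization) is an illustrative assumption.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reward(token):
    # Non-differentiable "evaluation metric": prefers token 1.
    return 1.0 if token == 1 else 0.0

random.seed(0)
logits = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    tok = random.choices([0, 1], weights=probs)[0]  # sample an action
    r = reward(tok)
    # REINFORCE update: r * d(log p(tok))/d(logits) = r * (one_hot - probs)
    for k in range(2):
        grad = (1.0 if k == tok else 0.0) - probs[k]
        logits[k] += lr * r * grad

probs = softmax(logits)
```

Because updates are scaled by the sampled reward, the policy concentrates probability on the rewarded token without the reward ever needing a gradient.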
1 code implementation • 16 Aug 2018 • Yi-Lin Tuan, Hung-Yi Lee
To stabilize the training of SeqGAN, Monte Carlo tree search (MCTS) or reward at every generation step (REGS) is used to evaluate the goodness of a generated subsequence.
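The Monte Carlo rollout idea can be sketched as: complete a partial sequence many times with a rollout policy and average a discriminator score over the completions, giving a per-step reward for the prefix. The toy `discriminator` (fraction of 1s) and rollout policy below are assumptions standing in for the learned components.

```python
import random

VOCAB = [0, 1]
MAX_LEN = 4

def rollout_policy(prefix):
    # Stand-in for the generator finishing the sequence; here uniform random.
    return random.choice(VOCAB)

def discriminator(seq):
    # Stand-in for a learned D(seq); higher means "more realistic".
    return sum(seq) / len(seq)

def rollout_reward(prefix, n_rollouts=64):
    """Estimate the goodness of a partial sequence by finishing it
    n_rollouts times and averaging the discriminator's score."""
    total = 0.0
    for _ in range(n_rollouts):
        seq = list(prefix)
        while len(seq) < MAX_LEN:
            seq.append(rollout_policy(seq))
        total += discriminator(seq)
    return total / n_rollouts

random.seed(0)
r_good = rollout_reward([1, 1])
r_bad = rollout_reward([0, 0])
```

Averaging over rollouts turns a sequence-level discriminator signal into a reward for each intermediate step, which is what REGS and MCTS-style evaluation provide during SeqGAN training.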
no code implementations • 15 Apr 2018 • Che-Ping Tsai, Yi-Lin Tuan, Lin-shan Lee
Spoken content processing (such as retrieval and browsing) is maturing, but singing content is still almost completely left out.