no code implementations • 16 Feb 2024 • Ting-Rui Chiang, Dani Yogatama
In this framework, we introduce (1) the notion of a common sense knowledge base, (2) a general formalism for natural language classification tasks, and (3) the notion of meaning association.
no code implementations • 16 Nov 2023 • Ting-Rui Chiang, Xinyan Velocity Yu, Joshua Robinson, Ollie Liu, Isabelle Lee, Dani Yogatama
Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive.
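The entry above concerns $k$NN-augmented language models. As a minimal sketch of the standard kNN-LM setup it studies (all names and the mixing weight `lam` are illustrative assumptions, not the paper's code): the retrieval distribution is a softmax over negative distances to the $k$ nearest stored (context key, next token) pairs, which is then interpolated with the base LM's distribution.

```python
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=2, temperature=1.0):
    """Build a next-token distribution from the k nearest stored
    (context-key, next-token) pairs: softmax over negative distances,
    mass accumulated onto each retrieved token id."""
    dists = np.linalg.norm(keys - query, axis=1)      # L2 distance to every key
    idx = np.argsort(dists)[:k]                       # indices of k nearest keys
    weights = np.exp(-dists[idx] / temperature)
    weights /= weights.sum()                          # normalize to a distribution
    p = np.zeros(vocab_size)
    for w, v in zip(weights, values[idx]):
        p[v] += w                                     # same token may be retrieved twice
    return p

def knn_lm_interpolate(p_lm, p_knn, lam=0.25):
    """Interpolate base-LM and retrieval distributions; lam is a
    hypothetical mixing weight chosen for illustration."""
    return lam * np.asarray(p_knn) + (1.0 - lam) * np.asarray(p_lm)
```

For example, with three stored keys and a query nearest the first two, the retrieved distribution places most mass on the token paired with the closest key, and the interpolated result remains a valid probability distribution.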
1 code implementation • 25 Oct 2023 • Ting-Rui Chiang, Dani Yogatama
Via a synthetic dataset, our analysis suggests that the distributional property indeed leads to the better sample efficiency of pretrained masked language models, but it does not fully explain their generalization capability.
no code implementations • LREC 2022 • Jessica Huynh, Ting-Rui Chiang, Jeffrey Bigham, Maxine Eskenazi
Dialog system developers need high-quality data to train, fine-tune and assess their systems.
no code implementations • Findings (ACL) 2022 • Ting-Rui Chiang, Yi-Pei Chen, Yi-Ting Yeh, Graham Neubig
While multilingual training is now an essential ingredient in machine translation (MT) systems, recent work has demonstrated that it has different effects in different multilingual settings, such as many-to-one, one-to-many, and many-to-many learning.
no code implementations • 12 Oct 2021 • Ting-Rui Chiang, Yi-Ting Yeh, Ta-Chung Chi, Yau-Shian Wang
ALFRED is a recently proposed benchmark that requires a model to complete tasks in simulated house environments specified by instructions in natural language.
no code implementations • 11 Oct 2021 • Ting-Rui Chiang
Despite the success of pretrained masked language models (MLMs), why MLM pretraining is useful remains a question that is not fully answered.
no code implementations • EMNLP (NLP4ConvAI) 2021 • Ting-Rui Chiang, Yi-Ting Yeh
Dialogue state tracking models play an important role in a task-oriented dialogue system.
no code implementations • EMNLP (BlackboxNLP) 2021 • Ting-Rui Chiang, Yun-Nung Chen
This work focuses on relating two mysteries in neural-based text generation: exposure bias, and text degeneration.
no code implementations • 14 Jun 2021 • Ting-Rui Chiang, Yun-Nung Chen
Hence, the acceptable reduction in performance on the pretrained task when distilling a model can be derived from the results, and we further compare the behavior of the pruned model before and after fine-tuning.
1 code implementation • 24 Sep 2019 • Ting-Rui Chiang, Hao-Tong Ye, Yun-Nung Chen
However, to the best of our knowledge, two important questions for conversational comprehension research have not been well studied: 1) How well can the benchmark dataset reflect models' content understanding?
no code implementations • 21 Mar 2019 • Chao-Wei Huang, Ting-Rui Chiang, Shang-Yu Su, Yun-Nung Chen
Response selection has been an emerging research topic due to the growing interest in dialogue modeling, where the goal of the task is to select an appropriate response for continuing a dialogue.
no code implementations • 21 Mar 2019 • Ting-Rui Chiang, Chao-Wei Huang, Shang-Yu Su, Yun-Nung Chen
With the increasing research interest in dialogue response generation, an emerging branch formulates this task as next-sentence selection: given the partial dialogue context, the goal is to determine the most probable next sentence.
1 code implementation • NAACL 2019 • Ting-Rui Chiang, Yun-Nung Chen
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions.