no code implementations • 16 Nov 2023 • Chia-Hsuan Lee, Hao Cheng, Mari Ostendorf
Large language models (LLMs) have revolutionized Natural Language Processing (NLP) systems but are computationally expensive.
no code implementations • 13 Jul 2023 • Bo-Ru Lu, Nikita Haduong, Chia-Hsuan Lee, Zeqiu Wu, Hao Cheng, Paul Koester, Jean Utke, Tao Yu, Noah A. Smith, Mari Ostendorf
The capabilities of pretrained language models have opened opportunities to explore new application areas, but applications involving human-human interaction are limited because most such data is protected from public release for privacy reasons.
no code implementations • NAACL (MIA) 2022 • Akari Asai, Shayne Longpre, Jungo Kasai, Chia-Hsuan Lee, Rui Zhang, Junjie Hu, Ikuya Yamada, Jonathan H. Clark, Eunsol Choi
We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages.
1 code implementation • 16 Mar 2022 • Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf
In this work, we propose an in-context learning (ICL) framework for zero-shot and few-shot dialogue state tracking (DST), where a large pre-trained language model (LM) takes a test instance and a few exemplars as input and directly decodes the dialogue state without any parameter updates.
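A minimal sketch of the in-context learning setup described above: labeled exemplars and a test dialogue are concatenated into a single prompt and the LM decodes the state as text. The slot names, exemplar dialogues, and prompt template here are invented for illustration, not the paper's actual format.

```python
def build_icl_prompt(exemplars, test_dialogue):
    """Concatenate (dialogue, state) exemplars and a test dialogue into one
    prompt; a pretrained LM would then decode the state with no fine-tuning."""
    parts = []
    for dialogue, state in exemplars:
        # Serialize the dialogue state as comma-separated slot=value pairs.
        state_str = ", ".join(f"{slot}={value}" for slot, value in state.items())
        parts.append(f"Dialogue: {dialogue}\nState: {state_str}")
    # The test instance ends with an empty "State:" for the LM to complete.
    parts.append(f"Dialogue: {test_dialogue}\nState:")
    return "\n\n".join(parts)

# Hypothetical exemplar (slot names are illustrative, not from the paper).
exemplars = [
    ("I need a cheap hotel in the north.",
     {"hotel-price": "cheap", "hotel-area": "north"}),
]
prompt = build_icl_prompt(exemplars, "Book a taxi to the airport at 5pm.")
```

The resulting string would be passed to a frozen LM; few-shot vs. zero-shot is just a matter of how many exemplars are included.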
no code implementations • Findings (NAACL) 2022 • Chia-Hsuan Lee, Aditya Siddhant, Viresh Ratnakar, Melvin Johnson
In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained with large-scale parallel documents.
Ranked #1 on Document Translation on WMT 2020
1 code implementation • EMNLP 2021 • Chia-Hsuan Lee, Hao Cheng, Mari Ostendorf
Task-oriented conversational systems often use dialogue state tracking to represent the user's intentions, which involves filling in values of pre-defined slots.
Ranked #1 on Dialogue State Tracking on MULTIWOZ 2.1 (MultiWOZ (Joint Goal Acc) metric)
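To make the slot-filling view of dialogue state tracking concrete, here is a minimal sketch of accumulating per-turn slot-value predictions into a dialogue state. The slot names and the fixed slot inventory are invented for illustration and are not the MultiWOZ schema.

```python
# Hypothetical pre-defined slot inventory (illustrative only).
PREDEFINED_SLOTS = {"restaurant-food", "restaurant-area", "restaurant-pricerange"}

def update_state(state, turn_slot_values):
    """Merge one turn's predicted slot-value pairs into the cumulative
    dialogue state; later turns overwrite earlier values, and anything
    outside the pre-defined slot inventory is ignored."""
    for slot, value in turn_slot_values.items():
        if slot in PREDEFINED_SLOTS:
            state[slot] = value
    return state

state = {}
state = update_state(state, {"restaurant-food": "italian"})
state = update_state(state, {"restaurant-area": "centre", "bogus-slot": "x"})
# state now holds only the two valid slot-value pairs.
```

Joint goal accuracy (the metric in the badge above) counts a turn correct only when this entire accumulated state matches the gold state exactly.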
2 code implementations • ACL 2021 • Chia-Hsuan Lee, Oleksandr Polozov, Matthew Richardson
The goal of database question answering is to enable natural language querying of real-life relational databases in diverse application domains.
Ranked #1 on Text-To-SQL on KaggleDBQA
no code implementations • 13 Jul 2019 • Chia-Hsuan Lee, Hung-Yi Lee
In this paper, we explore the problem of cross-lingual transfer learning for QA, where a source language task with plentiful annotations is utilized to improve the performance of a QA model on a target language task with limited available annotations.
1 code implementation • 16 Apr 2019 • Chia-Hsuan Lee, Yun-Nung Chen, Hung-Yi Lee
Spoken question answering (SQA) is challenging because it requires complex reasoning over spoken documents.
Ranked #3 on Spoken Language Understanding on Spoken-SQuAD
Automatic Speech Recognition (ASR) +4
1 code implementation • 7 Aug 2018 • Chia-Hsuan Lee, Shang-Ming Wang, Huan-Cheng Chang, Hung-Yi Lee
Machine reading comprehension has been widely studied, but comprehension of spoken content remains a less-investigated problem.