no code implementations • 2 Jun 2023 • Yosuke Kashiwagi, Siddhant Arora, Hayato Futami, Jessica Huynh, Shih-Lun Wu, Yifan Peng, Brian Yan, Emiru Tsunoo, Shinji Watanabe
We reduce the model size by applying tensor decomposition to the Conformer and E-Branchformer architectures used in our E2E SLU models.
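As a rough illustration of the idea (not the paper's exact method, which decomposes Conformer and E-Branchformer layers), the sketch below factorizes a single weight matrix with a truncated SVD, the rank-2 special case of tensor decomposition; the matrix, sizes, and rank here are made up for the example.

```python
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Factor W (m x n) into A (m x r) @ B (r x n) via truncated SVD,
    cutting m*n parameters down to r*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (m, r): left factors scaled by singular values
    B = Vt[:rank, :]             # (r, n): right factors
    return A, B

# Example: a 512x512 feed-forward weight compressed to rank 64
W = np.random.randn(512, 512)
A, B = low_rank_factorize(W, rank=64)
print(W.size, A.size + B.size)   # 262144 -> 65536 parameters (~4x smaller)
```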
no code implementations • 2 May 2023 • Siddhant Arora, Hayato Futami, Shih-Lun Wu, Jessica Huynh, Yifan Peng, Yosuke Kashiwagi, Emiru Tsunoo, Brian Yan, Shinji Watanabe
Recently, there have been efforts to introduce new benchmark tasks for spoken language understanding (SLU), such as semantic parsing.
Automatic Speech Recognition (ASR) +3
no code implementations • 2 May 2023 • Hayato Futami, Jessica Huynh, Siddhant Arora, Shih-Lun Wu, Yosuke Kashiwagi, Yifan Peng, Brian Yan, Emiru Tsunoo, Shinji Watanabe
In this track, we adopt a pipeline approach combining ASR and NLU.
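As a schematic of what a pipeline approach means here, the sketch below chains an ASR stage into an NLU stage; the stub models and the toy keyword-based intent parser are placeholders, not the systems used for the track.

```python
# Minimal ASR -> NLU pipeline sketch; both stubs are hypothetical stand-ins.
class StubASR:
    def transcribe(self, audio: bytes) -> str:
        return "play jazz music"  # pretend decoding result

class StubNLU:
    def parse(self, text: str) -> dict:
        # toy keyword-based intent classifier in place of a real NLU model
        intent = "play_music" if "play" in text else "unknown"
        return {"intent": intent, "transcript": text}

def slu_pipeline(audio: bytes, asr: StubASR, nlu: StubNLU) -> dict:
    return nlu.parse(asr.transcribe(audio))  # speech -> text -> semantics

print(slu_pipeline(b"...", StubASR(), StubNLU()))
```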
no code implementations • 27 Jan 2023 • Jessica Huynh, Cathy Jiao, Prakhar Gupta, Shikib Mehri, Payal Bajaj, Vishrav Chaudhary, Maxine Eskenazi
The paper shows that the choice of datasets used to train a model affects both how well it performs on a task and how the prompt should be structured.
no code implementations • SIGDIAL (ACL) 2022 • Jessica Huynh, Shikib Mehri, Cathy Jiao, Maxine Eskenazi
The DialPort project (http://dialport.org/), funded by the National Science Foundation (NSF), comprises a group of tools and services that aim to fulfill the needs of the dialog research community.
no code implementations • LREC 2022 • Jessica Huynh, Ting-Rui Chiang, Jeffrey Bigham, Maxine Eskenazi
Dialog system developers need high-quality data to train, fine-tune and assess their systems.
no code implementations • 9 Nov 2021 • Jessica Huynh, Jeffrey Bigham, Maxine Eskenazi
It also has the effect of giving the requester a bad reputation on the workers' forums.
1 code implementation • INLG (ACL) 2021 • Steven Y. Feng, Jessica Huynh, Chaitanya Narisetty, Eduard Hovy, Varun Gangal
We motivate and propose a suite of simple but effective improvements for concept-to-text generation called SAPPHIRE: Set Augmentation and Post-hoc PHrase Infilling and REcombination.
no code implementations • 13 Nov 2020 • Amith Ananthram, Kailash Karthik Saravanakumar, Jessica Huynh, Homayoon Beigi
To address these two challenges, we present a multi-modal approach that first transfers learning from related tasks in speech and text to produce robust neural embeddings, and then uses these embeddings to train a pLDA classifier that can adapt to previously unseen emotions and domains.
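As a sketch of the second stage only, the snippet below fits an LDA-family classifier on fixed-size embeddings; scikit-learn ships no pLDA, so plain LDA stands in for it here, and the random vectors are stand-ins for the transferred neural embeddings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 128))    # 200 utterance embeddings (stand-ins)
y_train = rng.integers(0, 4, size=200)   # 4 emotion classes (toy labels)

# Plain LDA used in place of pLDA: fit class-conditional Gaussians
# with a shared covariance over the embedding space.
clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)
print(clf.predict(rng.normal(size=(3, 128))))  # labels for 3 new embeddings
```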