no code implementations • NAACL (ACL) 2022 • Judith Gaspers, Anoop Kumar, Greg Ver Steeg, Aram Galstyan
Spoken Language Understanding (SLU) models in industry applications are usually trained offline on historical data, but have to perform well on incoming user requests after deployment.
no code implementations • ACL 2022 • Samuel Broscheit, Quynh Do, Judith Gaspers
Experiments show that a state-of-the-art BERT-based model suffers a performance loss under this drift.
no code implementations • EACL (AdaptNLP) 2021 • Judith Gaspers, Quynh Do, Tobias Röding, Melanie Bradford
This paper provides the first experimental study on the impact of using domain-specific representations on a BERT-based multi-task spoken language understanding (SLU) model for multi-domain applications.
no code implementations • COLING 2020 • Quynh Do, Judith Gaspers, Tobias Röding, Melanie Bradford
This paper addresses the question of to what degree a BERT-based multilingual Spoken Language Understanding (SLU) model can transfer knowledge across languages.
no code implementations • 6 Aug 2020 • Judith Gaspers, Quynh Do, Fabian Triefenbach
Although data imbalance is becoming increasingly common in real-world Spoken Language Understanding (SLU) applications, it has not been studied extensively in the literature.
no code implementations • IJCNLP 2019 • Quynh Do, Judith Gaspers
A typical cross-lingual transfer learning approach for boosting model performance on a target language is to pre-train the model on all available supervised data from another language.
no code implementations • NAACL 2019 • Andrew Johnson, Penny Karanasou, Judith Gaspers, Dietrich Klakow
This work explores cross-lingual transfer learning (TL) for named entity recognition, focusing on bootstrapping Japanese from English.
no code implementations • 3 Apr 2019 • Quynh Ngoc Thi Do, Judith Gaspers
Typically, spoken language understanding (SLU) models are trained on annotated data, which is costly to gather.
no code implementations • 22 Aug 2018 • Abdalghani Abujabal, Judith Gaspers
Named entity recognition (NER) is a vital task in spoken language understanding, which aims to identify mentions of named entities in text, e.g., from transcribed speech.
no code implementations • NAACL 2018 • Judith Gaspers, Penny Karanasou, Rajen Chatterjee
The goal is to decrease the cost and time needed to obtain an annotated corpus for the new language, while still achieving sufficiently large coverage of user requests.