no code implementations • NAACL (maiworkshop) 2021 • Han Ding, Li Erran Li, Zhiting Hu, Yi Xu, Dilek Hakkani-Tur, Zheng Du, Belinda Zeng
Recent vision-language understanding approaches adopt a multi-modal transformer pre-training and finetuning paradigm.
no code implementations • SIGDIAL (ACL) 2022 • Spandana Gella, Aishwarya Padmakumar, Patrick Lange, Dilek Hakkani-Tur
Embodied agents need to be able to interact in natural language – understanding task descriptions and asking appropriate follow-up questions to obtain necessary information to be effective at successfully accomplishing tasks for a wide range of users.
no code implementations • INLG (ACL) 2020 • Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, Dilek Hakkani-Tur
Open-domain dialog systems aim to generate relevant, informative and engaging responses.
no code implementations • EMNLP (NLP4ConvAI) 2021 • Pei Zhou, Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
We further investigate whether such models can identify when to generate implicit background knowledge and when it is not necessary.
no code implementations • NAACL 2022 • Sha Li, Mahdi Namazifar, Di Jin, Mohit Bansal, Heng Ji, Yang Liu, Dilek Hakkani-Tur
In this work, we propose to automatically convert the background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs.
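A toy sketch of the graph-based selection idea, not the paper's actual pipeline: here nodes are knowledge sentences, edges link sentences that share content words, and selection scores combine overlap with the dialogue context and graph connectivity. All function names and the scoring weights are illustrative assumptions.

```python
# Toy semantic graph over knowledge sentences (hypothetical, simplified):
# link sentences sharing content words, then select the sentence best
# matching the dialogue context, with a small bonus for connectivity.
def content_words(text, stop={"the", "a", "is", "of", "in", "and", "to"}):
    return {w.lower().strip(".,?") for w in text.split()} - stop

def build_graph(sentences):
    words = [content_words(s) for s in sentences]
    edges = {i: set() for i in range(len(sentences))}
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if words[i] & words[j]:
                edges[i].add(j)
                edges[j].add(i)
    return edges, words

def select_knowledge(sentences, context):
    edges, words = build_graph(sentences)
    ctx = content_words(context)
    # score = context overlap + 0.1 * node degree (weights are arbitrary)
    scores = [len(words[i] & ctx) + 0.1 * len(edges[i])
              for i in range(len(sentences))]
    return sentences[max(range(len(sentences)), key=scores.__getitem__)]

docs = ["Paris is the capital of France.",
        "France borders Spain.",
        "Cats sleep a lot."]
best = select_knowledge(docs, "Tell me about the capital of France")
```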
2 code implementations • 23 Aug 2023 • Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-Tur
We introduce Topical-Chat, a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don't have explicitly defined roles, to help further research in open-domain conversational AI.
no code implementations • 9 Aug 2023 • Hangjie Shi, Leslie Ball, Govind Thattai, Desheng Zhang, Lucy Hu, Qiaozi Gao, Suhaila Shakiah, Xiaofeng Gao, Aishwarya Padmakumar, Bofei Yang, Cadence Chung, Dinakar Guthy, Gaurav Sukhatme, Karthika Arumugam, Matthew Wen, Osman Ipek, Patrick Lange, Rohan Khanna, Shreyas Pansare, Vasu Sharma, Chao Zhang, Cris Flagg, Daniel Pressel, Lavina Vaz, Luke Dai, Prasoon Goyal, Sattvik Sahai, Shaohua Liu, Yao Lu, Anna Gottardi, Shui Hu, Yang Liu, Dilek Hakkani-Tur, Kate Bland, Heather Rocker, James Jeun, Yadunandana Rao, Michael Johnston, Akshaya Iyengar, Arindam Mandal, Prem Natarajan, Reza Ghanadan
The Alexa Prize program has empowered numerous university students to explore, experiment, and showcase their talents in building conversational agents through challenges like the SocialBot Grand Challenge and the TaskBot Challenge.
1 code implementation • 20 May 2023 • Chao Zhao, Spandana Gella, Seokhwan Kim, Di Jin, Devamanyu Hazarika, Alexandros Papangelis, Behnam Hedayatnia, Mahdi Namazifar, Yang Liu, Dilek Hakkani-Tur
We hope this task and dataset can promote further research on TOD and subjective content understanding.
no code implementations • 10 May 2023 • Mert İnan, Aishwarya Padmakumar, Spandana Gella, Patrick Lange, Dilek Hakkani-Tur
Task planning is an important component of traditional robotics systems, enabling robots to compose fine-grained skills to perform more complex tasks.
no code implementations • 21 Feb 2023 • Sree Hari Krishnan Parthasarathi, Lu Zeng, Dilek Hakkani-Tur
Conversational, multi-turn, text-to-SQL (CoSQL) tasks map natural language utterances in a dialogue to SQL queries.
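A toy illustration of the task format, not a trained model: conversational text-to-SQL must resolve follow-up utterances against earlier turns, which is sketched here by carrying the previous turn's SQL forward as context. The utterances and queries are hypothetical examples.

```python
# Hypothetical rule-based mapper showing the multi-turn text-to-SQL
# setting: a follow-up turn refines the SQL produced for the prior turn.
def to_sql(utterance, prev_sql=None):
    u = utterance.lower()
    if "how many singers" in u:
        return "SELECT count(*) FROM singer"
    if "french" in u and prev_sql is not None:
        # follow-up turn: refine the query carried over from the last turn
        return prev_sql + " WHERE country = 'France'"
    raise ValueError("unhandled utterance")

turn1 = to_sql("How many singers do we have?")
turn2 = to_sql("How many of them are French?", prev_sql=turn1)
```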
no code implementations • 16 Feb 2023 • Mahdi Namazifar, Devamanyu Hazarika, Dilek Hakkani-Tur
Moreover, we argue that the bias term of the value linear transformation has a more prominent role than that of the bias term of the query linear transformation.
no code implementations • 10 Feb 2023 • Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Sungjin Lee, Devamanyu Hazarika, Mahdi Namazifar, Di Jin, Yang Liu, Dilek Hakkani-Tur
This work focuses on in-context data augmentation for intent detection.
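One common way to set up in-context augmentation, sketched under assumptions (the prompt template and intent name are invented): seed utterances for an intent are packed into a few-shot prompt, and a large language model (call not shown) would then generate additional utterances carrying the same intent.

```python
# Build a few-shot augmentation prompt from seed utterances; an LLM call
# on this prompt (omitted) would yield new labeled training utterances.
def augmentation_prompt(intent, seeds, n_new=3):
    lines = [f"Examples of user messages with intent '{intent}':"]
    lines += [f"- {s}" for s in seeds]
    lines.append(f"Write {n_new} more distinct messages with the same intent:")
    return "\n".join(lines)

prompt = augmentation_prompt(
    "card_arrival",
    ["How long until my card arrives?", "When will I get my new card?"],
)
```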
1 code implementation • 7 Feb 2023 • Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Seokhwan Kim, Andy Rosenbaum, Yang Liu, Zhou Yu, Dilek Hakkani-Tur
Collecting high quality conversational data can be very expensive for most applications and infeasible for others due to privacy, ethical, or similar concerns.
1 code implementation • 20 Dec 2022 • Prakhar Gupta, Yang Liu, Di Jin, Behnam Hedayatnia, Spandana Gella, Sijia Liu, Patrick Lange, Julia Hirschberg, Dilek Hakkani-Tur
These guidelines provide information about the context they are applicable to and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent.
1 code implementation • 26 Oct 2022 • Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tur
Prefix-tuning, or more generally continuous prompt tuning, has become an essential paradigm of parameter-efficient transfer learning.
no code implementations • 25 Oct 2022 • Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Andy Rosenbaum, Seokhwan Kim, Yang Liu, Zhou Yu, Dilek Hakkani-Tur
Dialogue understanding tasks often necessitate abundant annotated data to achieve good performance, which presents challenges in low-resource settings.
no code implementations • 19 Oct 2022 • Lu Zeng, Sree Hari Krishnan Parthasarathi, Dilek Hakkani-Tur
The text-to-SQL task maps natural language utterances to structured queries that can be issued to a database.
no code implementations • 26 Sep 2022 • Spandana Gella, Aishwarya Padmakumar, Patrick Lange, Dilek Hakkani-Tur
Embodied agents need to be able to interact in natural language – understanding task descriptions and asking appropriate follow-up questions to obtain necessary information to be effective at successfully accomplishing tasks for a wide range of users.
no code implementations • 13 Sep 2022 • Anna Gottardi, Osman Ipek, Giuseppe Castellucci, Shui Hu, Lavina Vaz, Yao Lu, Anju Khatri, Anjali Chadha, Desheng Zhang, Sattvik Sahai, Prerna Dwivedi, Hangjie Shi, Lucy Hu, Andy Huang, Luke Dai, Bofei Yang, Varun Somani, Pankaj Rajan, Ron Rezac, Michael Johnston, Savanna Stiff, Leslie Ball, David Carmel, Yang Liu, Dilek Hakkani-Tur, Oleg Rokhlenko, Kate Bland, Eugene Agichtein, Reza Ghanadan, Yoelle Maarek
Since its inception in 2016, the Alexa Prize program has enabled hundreds of university students to explore and compete to develop conversational agents through the SocialBot Grand Challenge.
no code implementations • SIGDIAL (ACL) 2022 • Behnam Hedayatnia, Di Jin, Yang Liu, Dilek Hakkani-Tur
In this work, we curated a dataset where responses from multiple response generators produced for the same dialog context are manually annotated as appropriate (positive) and inappropriate (negative).
1 code implementation • SIGDIAL (ACL) 2022 • Di Jin, Sijia Liu, Yang Liu, Dilek Hakkani-Tur
Previous work has treated contradiction detection in bot responses as a task similar to natural language inference, e.g., detecting the contradiction between a pair of bot utterances.
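The pairwise formulation described above can be sketched as follows. The `nli_contradicts` function is a crude placeholder that only catches direct negation; a real system would use a trained NLI model here, and all names and examples are hypothetical.

```python
# Pairwise contradiction check: compare the new response against every
# earlier bot utterance, as the NLI-style prior work does.
def nli_contradicts(premise, hypothesis):
    # placeholder for a real NLI model: detects only direct negation
    p = premise.lower().rstrip(".")
    h = hypothesis.lower().rstrip(".")
    return (h == p.replace(" like ", " don't like ")
            or p == h.replace(" like ", " don't like "))

def find_contradictions(bot_utterances, new_response):
    return [u for u in bot_utterances if nli_contradicts(u, new_response)]

history = ["I like jazz.", "I live in Seattle."]
flagged = find_contradictions(history, "I don't like jazz.")
```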
no code implementations • SIGDIAL (ACL) 2022 • Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Dilek Hakkani-Tur
Specifically, we show that for open-domain conversations with 10% of seed data, our approach performs close to the baseline that uses 100% of the data, while for knowledge-grounded conversations, it achieves the same using only 1% of the data, on human ratings of engagingness, fluency, and relevance.
no code implementations • 15 Jun 2022 • Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakrishnan, Thomas Gueudre, Dilek Hakkani-Tur, Wael Hamza, Jonathan Hueser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, Alessandro Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith Peris, Chandana Satya Prakash, Stephen Rawls, Andy Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Harakere Sridhar, Liz Tan, Fabian Triefenbach, Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, Prem Natarajan
We present results from a large-scale experiment on pretraining encoders with non-embedding parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M-170M parameters, and their application to the Natural Language Understanding (NLU) component of a virtual assistant system.
no code implementations • 15 Jun 2022 • Sha Li, Mahdi Namazifar, Di Jin, Mohit Bansal, Heng Ji, Yang Liu, Dilek Hakkani-Tur
Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging.
1 code implementation • 1 Jun 2022 • Yifan Chen, Tianning Xu, Dilek Hakkani-Tur, Di Jin, Yun Yang, Ruoqing Zhu
This paper revisits the approach from a matrix approximation perspective, and identifies two issues in the existing layer-wise sampling methods: suboptimal sampling probabilities and estimation biases induced by sampling without replacement.
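The estimator family behind layer-wise sampling can be illustrated with a small with-replacement importance-sampling sketch (all names are assumptions, and the scalar sum stands in for embedding aggregation): when the sampling probabilities are proportional to the values' magnitudes, the estimate's variance vanishes, which is the sense in which some probability choices are suboptimal.

```python
import random

# With-replacement importance sampling: the average of value/probability
# over sampled indices is an unbiased estimate of sum(values).
def sampled_sum(values, probs, n_samples, rng):
    idx = rng.choices(range(len(values)), weights=probs, k=n_samples)
    return sum(values[i] / probs[i] for i in idx) / n_samples

values = [1.0, 2.0, 3.0, 4.0]
# probabilities proportional to the values: zero-variance for this sum
probs = [v / sum(values) for v in values]
estimate = sampled_sum(values, probs, 5, random.Random(0))
```

With proportional probabilities every term `values[i] / probs[i]` equals the true sum, so the estimate is exact regardless of which indices are drawn; with uniform probabilities the same estimator is still unbiased but has nonzero variance.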
no code implementations • 1 Jun 2022 • Alexandros Papangelis, Nicole Chartier, Pankaj Rajan, Julia Hirschberg, Dilek Hakkani-Tur
In this work, we conduct a study to better understand how people rate their interactions with conversational agents.
no code implementations • insights (ACL) 2022 • Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur
Natural language guided embodied task completion is a challenging problem since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes.
1 code implementation • Findings (NAACL) 2022 • Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tur
The massive amount of trainable parameters in pre-trained language models (PLMs) makes them hard to deploy across multiple downstream tasks.
1 code implementation • Findings (ACL) 2022 • Sarik Ghazarian, Behnam Hedayatnia, Alexandros Papangelis, Yang Liu, Dilek Hakkani-Tur
Existing model-based metrics for system response evaluation are trained on human annotated data, which is cumbersome to collect.
no code implementations • 22 Mar 2022 • Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
In many real-world settings, machine learning models need to identify user inputs that are out-of-domain (OOD) so as to avoid performing wrong actions.
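A standard OOD baseline in this setting, shown as a minimal sketch rather than the paper's method: flag an input as out-of-domain when the classifier's maximum softmax probability falls below a threshold. The logits and threshold are illustrative.

```python
import math

# Maximum-softmax-probability OOD detection (a common baseline):
# peaked class distributions are treated as in-domain, flat ones as OOD.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_ood(logits, threshold=0.7):
    return max(softmax(logits)) < threshold

confident = is_ood([5.0, 0.1, 0.2])   # peaked distribution -> in-domain
uncertain = is_ood([1.0, 0.9, 1.1])   # nearly flat -> out-of-domain
```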
no code implementations • 18 Mar 2022 • Shikib Mehri, Jinho Choi, Luis Fernando D'Haro, Jan Deriu, Maxine Eskenazi, Milica Gasic, Kallirroi Georgila, Dilek Hakkani-Tur, Zekang Li, Verena Rieser, Samira Shaikh, David Traum, Yi-Ting Yeh, Zhou Yu, Yizhe Zhang, Chen Zhang
This is a report on the NSF Future Directions Workshop on Automatic Evaluation of Dialog.
1 code implementation • INLG (ACL) 2021 • Mihail Eric, Nicole Chartier, Behnam Hedayatnia, Karthik Gopalakrishnan, Pankaj Rajan, Yang Liu, Dilek Hakkani-Tur
Incorporating external knowledge sources effectively in conversations is a longstanding problem in open-domain dialogue research.
no code implementations • 16 Dec 2021 • Lisa Bauer, Karthik Gopalakrishnan, Spandana Gella, Yang Liu, Mohit Bansal, Dilek Hakkani-Tur
We define three broad classes of task descriptions for these tasks: statement, question, and completion, with numerous lexical variants within each class.
1 code implementation • NAACL 2022 • Yifan Chen, Qi Zeng, Dilek Hakkani-Tur, Di Jin, Heng Ji, Yun Yang
Transformer-based models are not efficient in processing long sequences due to the quadratic space and time complexity of the self-attention modules.
no code implementations • 16 Nov 2021 • Sarik Ghazarian, Behnam Hedayatnia, Alexandros Papangelis, Yang Liu, Dilek Hakkani-Tur
Automatic evaluation is beneficial for open-domain dialog system development.
no code implementations • ACL 2022 • Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
Implicit knowledge, such as common sense, is key to fluid human conversations.
no code implementations • 15 Oct 2021 • Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Dilek Hakkani-Tur
Rich, open-domain textual data available on the web resulted in great advancements for language processing.
1 code implementation • 11 Oct 2021 • Sashank Santhanam, Behnam Hedayatnia, Spandana Gella, Aishwarya Padmakumar, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
We demonstrate the benefit of our Conv-FEVER dataset by showing that models trained on this data perform reasonably well at detecting factually inconsistent responses with respect to the provided knowledge, through evaluation on our human annotated data.
3 code implementations • 1 Oct 2021 • Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, Dilek Hakkani-Tur
Robots operating in human spaces must be able to engage in natural language interaction with people, both understanding and executing instructions, and using conversation to resolve ambiguity and recover from mistakes.
no code implementations • 29 Sep 2021 • Yifan Chen, Tianning Xu, Dilek Hakkani-Tur, Di Jin, Yun Yang, Ruoqing Zhu
To accelerate the training of graph convolutional networks (GCN), many sampling-based methods have been developed for approximating the embedding aggregation.
1 code implementation • 28 Sep 2021 • Seokhwan Kim, Yang Liu, Di Jin, Alexandros Papangelis, Karthik Gopalakrishnan, Behnam Hedayatnia, Dilek Hakkani-Tur
Most prior work in dialogue modeling has been on written conversations, largely because of the available datasets.
no code implementations • EMNLP (NLP4ConvAI) 2021 • Alicia Y. Tsai, Shereen Oraby, Vittorio Perera, Jiun-Yu Kao, Yuheng Du, Anjali Narayan-Chen, Tagyoung Chung, Dilek Hakkani-Tur
Our results show that while high style accuracy and semantic correctness are easier to achieve for more lexically-defined styles with conditional training, stylistic control is also achievable for more semantically complex styles using discriminator-based guided decoding methods.
1 code implementation • EMNLP (NLP4ConvAI) 2021 • Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
Most prior work on task-oriented dialogue systems is restricted to supporting domain APIs.
1 code implementation • SIGDIAL (ACL) 2021 • Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
Moreover, existing dialogue datasets do not explicitly focus on exhibiting commonsense as a facet.
no code implementations • ACL (dialdoc) 2021 • Di Jin, Seokhwan Kim, Dilek Hakkani-Tur
Most prior work on task-oriented dialogue systems is restricted to limited coverage of domain APIs.
no code implementations • SIGDIAL (ACL) 2021 • Alexandros Papangelis, Karthik Gopalakrishnan, Aishwarya Padmakumar, Seokhwan Kim, Gokhan Tur, Dilek Hakkani-Tur
We show an average improvement of 35% in intent detection and 21% in slot tagging over a baseline model trained from the seed data.
no code implementations • NAACL 2021 • Mingyue Shang, Tong Wang, Mihail Eric, Jiangning Chen, Jiyang Wang, Matthew Welch, Tiantong Deng, Akshay Grewal, Han Wang, Yue Liu, Yang Liu, Dilek Hakkani-Tur
In recent years, incorporating external knowledge for response generation in open-domain conversation systems has attracted great interest.
no code implementations • EMNLP (DeeLIO) 2020 • Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur
Pretrained language models have excelled at many NLP tasks recently; however, their social intelligence is still unsatisfactory.
no code implementations • 12 May 2021 • Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur
Towards improving language models' social intelligence, we focus on the Social IQA dataset, a task requiring social and emotional commonsense reasoning.
no code implementations • NAACL 2021 • Anish Acharya, Suranjit Adhikari, Sanchit Agarwal, Vincent Auvray, Nehal Belgamwar, Arijit Biswas, Shubhra Chandra, Tagyoung Chung, Maryam Fazel-Zarandi, Raefer Gabriel, Shuyang Gao, Rahul Goel, Dilek Hakkani-Tur, Jan Jezabek, Abhay Jha, Jiun-Yu Kao, Prakash Krishnan, Peter Ku, Anuj Goyal, Chien-Wei Lin, Qing Liu, Arindam Mandal, Angeliki Metallinou, Vishal Naik, Yi Pan, Shachi Paul, Vittorio Perera, Abhishek Sethi, Minmin Shen, Nikko Strom, Eddie Wang
Finally, we evaluate our system using a typical movie ticket booking task and show that the dialogue simulator is an essential component of the system that leads to over 50% improvement in turn-level action signature prediction accuracy.
1 code implementation • 22 Jan 2021 • Seokhwan Kim, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tur
This challenge track aims to expand the coverage of task-oriented dialogue systems by incorporating external unstructured knowledge sources.
1 code implementation • EACL 2021 • Saket Dingliwal, Bill Gao, Sanchit Agarwal, Chien-Wei Lin, Tagyoung Chung, Dilek Hakkani-Tur
Dialogue State Tracking (DST) forms a core component of automated chatbot based systems designed for specific goals like hotel, taxi reservation, tourist information, etc.
no code implementations • 2 Dec 2020 • Qing Ping, Feiyang Niu, Govind Thattai, Joel Chengottusseriyil, Qiaozi Gao, Aishwarya Reganti, Prashanth Rajagopal, Gokhan Tur, Dilek Hakkani-Tur, Prem Natarajan
Current conversational AI systems aim to understand a set of pre-designed requests and execute related actions, which limits them to evolve naturally and adapt based on human interactions.
no code implementations • 16 Nov 2020 • Chien-Wei Lin, Vincent Auvray, Daniel Elkind, Arijit Biswas, Maryam Fazel-Zarandi, Nehal Belgamwar, Shubhra Chandra, Matt Zhao, Angeliki Metallinou, Tagyoung Chung, Charlie Shucheng Zhu, Suranjit Adhikari, Dilek Hakkani-Tur
Our approach includes a novel goal-sampling technique for sampling plausible user goals and a dialog simulation technique that uses heuristic interplay between the user and the system (Alexa), where the user tries to achieve the sampled goal.
no code implementations • SIGDIAL (ACL) 2020 • Lena Reed, Vrindavan Harrison, Shereen Oraby, Dilek Hakkani-Tur, Marilyn Walker
Here we explore, for the first time, whether it is possible to train an NLG for a new larger ontology using existing training sets for the restaurant domain, where each set is based on a different ontology.
1 code implementation • 28 Sep 2020 • Shikib Mehri, Mihail Eric, Dilek Hakkani-Tur
A long-standing goal of task-oriented dialogue research is the ability to flexibly adapt dialogue models to new domains.
1 code implementation • 18 Aug 2020 • Karthik Gopalakrishnan, Behnam Hedayatnia, Longshaokan Wang, Yang Liu, Dilek Hakkani-Tur
Large end-to-end neural open-domain chatbots are becoming increasingly popular.
2 code implementations • SIGDIAL (ACL) 2020 • Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, Dilek Hakkani-Tur
In this paper, we propose to expand coverage of task-oriented dialogue systems by incorporating external unstructured knowledge sources.
no code implementations • 26 May 2020 • Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, Dilek Hakkani-Tur
In this paper, we propose using a dialogue policy to plan the content and style of target responses in the form of an action plan, which includes knowledge sentences related to the dialogue context, targeted dialogue acts, topic information, etc.
1 code implementation • INLG (ACL) 2020 • Yuheng Du, Shereen Oraby, Vittorio Perera, Minmin Shen, Anjali Narayan-Chen, Tagyoung Chung, Anu Venkatesh, Dilek Hakkani-Tur
We train different state-of-the-art models for neural natural language generation on this dataset and show that in many cases, including rich schema information allows our models to produce higher quality outputs both in terms of semantics and diversity.
1 code implementation • WS 2020 • Shuyang Gao, Sanchit Agarwal, Tagyoung Chung, Di Jin, Dilek Hakkani-Tur
In this paper, we propose using machine reading comprehension (RC) in state tracking from two perspectives: model architectures and datasets.
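The reading-comprehension framing can be sketched in a few lines (the question template is an invented illustration, not the paper's exact wording): each (domain, slot) pair is turned into a natural language question answered against the dialogue, so an RC model can perform state tracking.

```python
# Hypothetical slot-to-question template for RC-style state tracking:
# the RC model (not shown) would extract the answer span from the dialog.
def slot_question(domain, slot):
    return f"What is the {slot} of the {domain} the user wants?"

questions = [slot_question("restaurant", s)
             for s in ("price range", "area", "food")]
```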
no code implementations • 7 Feb 2020 • Adarsh Kumar, Peter Ku, Anuj Kumar Goyal, Angeliki Metallinou, Dilek Hakkani-Tur
Task-oriented dialog agents provide a natural language interface for users to complete their goals.
no code implementations • 2 Dec 2019 • Ta-Chung Chi, Mihail Eric, Seokhwan Kim, Minmin Shen, Dilek Hakkani-Tur
We demonstrate the proposed strategy is substantially more realistic and data-efficient compared to previously proposed pre-exploration techniques.
2 code implementations • 1 Oct 2019 • Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, Dilek Hakkani-Tur
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language.
no code implementations • WS 2019 • Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, Dilek Hakkani-Tur
Recent advances in neural sequence-to-sequence models have led to promising results for several language generation-based tasks, including dialogue response generation, summarization, and machine translation.
no code implementations • WS 2019 • Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, Dilek Hakkani-Tur
In contrast to traditional state tracking methods where the dialog state is often predicted as a distribution over a closed set of all the possible slot values within an ontology, our method uses a simple attention-based neural network to point to the slot values within the conversation.
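A minimal sketch of the pointing idea, with toy hand-built vectors standing in for learned embeddings: each conversation token is scored against a slot query vector, and the tracker points at the best-scoring token instead of classifying over a closed ontology.

```python
# Attention-style value pointing (toy vectors, illustrative only):
# score every token against the slot query and return the argmax token.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_to_value(token_vectors, tokens, slot_query):
    scores = [dot(v, slot_query) for v in token_vectors]
    return tokens[max(range(len(tokens)), key=scores.__getitem__)]

tokens = ["book", "a", "cheap", "hotel"]
vecs = [[1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.5, 0.2]]
value = point_to_value(vecs, tokens, slot_query=[0.0, 1.0])  # "price" slot
```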
5 code implementations • LREC 2020 • Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Peter Ku, Anuj Kumar Goyal, Sanchit Agarwal, Shuyang Gao, Dilek Hakkani-Tur
To fix the noisy state annotations, we use crowdsourced workers to re-annotate state and utterances based on the original utterances in the dataset.
1 code implementation • ACL 2019 • Darsh J Shah, Raghav Gupta, Amir A Fayazi, Dilek Hakkani-Tur
Task-oriented dialog systems increasingly rely on deep learning-based slot filling models, usually needing extensive labeled training data for target domains.
no code implementations • WS 2019 • Sanghyun Yi, Rahul Goel, Chandra Khatri, Alessandra Cervone, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-Tur
Having explicit feedback on the relevance and interestingness of a system response at each turn can be a useful signal for mitigating such issues and improving system quality by selecting responses from different approaches.
no code implementations • WS 2019 • Alessandra Cervone, Chandra Khatri, Rahul Goel, Behnam Hedayatnia, Anu Venkatesh, Dilek Hakkani-Tur, Raefer Gabriel
Our experiments show the feasibility of learning statistical NLG models for open-domain QA with larger ontologies.
no code implementations • 27 Dec 2018 • Chandra Khatri, Behnam Hedayatnia, Anu Venkatesh, Jeff Nunn, Yi Pan, Qing Liu, Han Song, Anna Gottardi, Sanjeev Kwatra, Sanju Pancholi, Ming Cheng, Qinglang Chen, Lauren Stubel, Karthik Gopalakrishnan, Kate Bland, Raefer Gabriel, Arindam Mandal, Dilek Hakkani-Tur, Gene Hwang, Nate Michel, Eric King, Rohit Prasad
In the second iteration of the competition in 2018, university teams advanced the state of the art by using context in dialog models, leveraging knowledge graphs for language understanding, handling complex utterances, building statistical and hierarchical dialog managers, and leveraging model-driven signals from user responses.
no code implementations • ICLR 2019 • Izzeddin Gur, Ulrich Rueckert, Aleksandra Faust, Dilek Hakkani-Tur
Even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations to guide the exploration, they still fail in environments where the set of possible instructions can reach millions.
no code implementations • 30 Nov 2018 • Rahul Goel, Shachi Paul, Tagyoung Chung, Jeremie Lecomte, Arindam Mandal, Dilek Hakkani-Tur
This limits such systems in two different ways: If there is an update in the task domain, the dialogue system usually needs to be updated or completely re-trained.
no code implementations • WS 2018 • Abhinav Rastogi, Raghav Gupta, Dilek Hakkani-Tur
This paper presents a novel approach for multi-task learning of language understanding (LU) and dialogue state tracking (DST) in task-oriented dialogue systems.
no code implementations • 11 Nov 2018 • Izzeddin Gur, Dilek Hakkani-Tur, Gokhan Tur, Pararth Shah
We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal.
no code implementations • 24 Oct 2018 • Nevan Wichers, Dilek Hakkani-Tur, Jindong Chen
Images may have elements containing text and a bounding box associated with them, for example, text identified via optical character recognition on a computer screen image, or a natural image with labeled objects.
no code implementations • 1 Jul 2018 • Raghav Gupta, Abhinav Rastogi, Dilek Hakkani-Tur
In task-oriented dialogue systems, spoken language understanding, or SLU, refers to the task of parsing natural language user utterances into semantic frames.
1 code implementation • NAACL 2018 • Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck
To address this challenge, we propose a hybrid imitation and reinforcement learning method, with which a dialogue agent can effectively learn from its interaction with users by learning from human teaching and feedback.
1 code implementation • 29 Dec 2017 • Abhinav Rastogi, Dilek Hakkani-Tur, Larry Heck
We introduce a novel framework for state tracking which is independent of the slot value set, and represent the dialogue state as a distribution over a set of values of interest (candidate set) derived from the dialogue history or knowledge.
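The candidate-set idea can be sketched as follows, with toy recency weights standing in for a learned scorer: values are gathered from the dialogue history rather than a fixed ontology, and the state is a normalized distribution over that open set. All names here are assumptions.

```python
# Dialogue state as a distribution over a candidate set mined from the
# conversation itself; later mentions get higher (toy) raw scores.
def candidate_distribution(mentions):
    raw = {v: i + 1 for i, v in enumerate(mentions)}
    total = sum(raw.values())
    return {v: s / total for v, s in raw.items()}

# user first asked for Italian food, then switched to Thai
dist = candidate_distribution(["italian", "thai"])
```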
1 code implementation • 22 Dec 2017 • Saurabh Kumar, Pararth Shah, Dilek Hakkani-Tur, Larry Heck
We present a framework combining hierarchical and multi-agent deep reinforcement learning approaches to solve coordination problems among a multitude of agents using a semi-decentralized model.
no code implementations • 29 Nov 2017 • Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck
We show that deep RL based optimization leads to significant improvement in task success rate and reduction in dialogue length compared to the supervised training model.
1 code implementation • 7 Jul 2017 • Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck
While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems.
1 code implementation • WS 2017 • Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck
We compare the performance of our proposed architecture with two context models, one that uses just the previous turn context and another that encodes dialogue context in a memory network, but loses the order of utterances in the dialogue history.
1 code implementation • 3 Dec 2016 • Xuesong Yang, Yun-Nung Chen, Dilek Hakkani-Tur, Paul Crook, Xiujun Li, Jianfeng Gao, Li Deng
Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance.
no code implementations • 12 Sep 2016 • Yun-Nung Chen, Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Gao, Li Deng
Natural language understanding (NLU) is a core component of a spoken dialogue system.
no code implementations • 25 Jun 2016 • Lu Wang, Larry Heck, Dilek Hakkani-Tur
Our session-based models outperform the state-of-the-art method for entity extraction task in SDS.
no code implementations • 20 Dec 2013 • Yann N. Dauphin, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck
We propose a novel zero-shot learning method for semantic utterance classification (SUC).
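The zero-shot intuition can be sketched with toy bag-of-feature vectors standing in for real embeddings: the utterance and each category label are embedded in a shared space and the nearest label wins, so unseen classes need no training examples. The labels and vectors are invented for illustration.

```python
import math

# Zero-shot classification by embedding similarity (toy vectors):
# pick the label whose embedding is closest to the utterance embedding.
def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a))
           * math.sqrt(sum(y * y for y in b)))
    return num / den

def zero_shot_classify(utt_vec, label_vecs):
    return max(label_vecs, key=lambda lbl: cosine(utt_vec, label_vecs[lbl]))

labels = {"weather": [1.0, 0.1], "music": [0.1, 1.0]}
pred = zero_shot_classify([0.9, 0.2], labels)
```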