1 code implementation • 21 Feb 2024 • Prakamya Mishra, Zonghai Yao, Parth Vashisht, Feiyun Ouyang, Beining Wang, Vidhi Dhaval Mody, Hong Yu
Large Language Models (LLMs) such as GPT and Llama have demonstrated significant achievements in summarization tasks but struggle with factual inaccuracies, a critical issue in clinical NLP applications where errors could lead to serious consequences.
1 code implementation • 30 Oct 2023 • Prakamya Mishra, Zonghai Yao, Shuwei Chen, Beining Wang, Rohan Mittal, Hong Yu
In this work, we propose a new pipeline using ChatGPT instead of human experts to generate high-quality feedback data for improving factual consistency in the clinical note summarization task.
no code implementations • 23 Nov 2020 • Prakamya Mishra, Saroj Kaushik, Kuntal Dey
This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism (Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in user-generated short text using only the conversational context.
no code implementations • 23 Nov 2020 • Prakamya Mishra
STEPs-RL is trained in a supervised manner to predict the phonetic sequence of a target spoken word from the speech and text of its surrounding contextual spoken words, such that the model encodes meaningful latent representations.
no code implementations • 6 Jul 2020 • Prakamya Mishra, Pranav Mathur
Substantial work has been done on building text-based language models for various NLP tasks, but comparatively little research has explored audio-based language models.
no code implementations • 6 Nov 2019 • Prakamya Mishra
This step is necessary because combining highly correlated features reduces the training time.