Search Results for author: Prakamya Mishra

Found 6 papers, 2 papers with code

SYNFAC-EDIT: Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization

1 code implementation • 21 Feb 2024 • Prakamya Mishra, Zonghai Yao, Parth Vashisht, Feiyun Ouyang, Beining Wang, Vidhi Dhaval Mody, Hong Yu

Large Language Models (LLMs) such as GPT & Llama have demonstrated significant achievements in summarization tasks but struggle with factual inaccuracies, a critical issue in clinical NLP applications where errors could lead to serious consequences.

Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization

1 code implementation • 30 Oct 2023 • Prakamya Mishra, Zonghai Yao, Shuwei Chen, Beining Wang, Rohan Mittal, Hong Yu

In this work, we propose a new pipeline using ChatGPT instead of human experts to generate high-quality feedback data for improving factual consistency in the clinical note summarization task.
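The pipeline itself is not reproduced here, but a minimal sketch of the general idea — prompting an LLM to serve as the feedback source for a draft clinical summary — might look like the following. It assumes the `openai` Python client; the prompt wording, the model name, and the `synthetic_edit_feedback` helper are hypothetical, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): prompt an LLM to produce edit-style
# feedback on a draft clinical summary, using the `openai` Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def synthetic_edit_feedback(source_note: str, draft_summary: str) -> str:
    """Ask the model to point out and correct factual errors in the draft."""
    prompt = (
        "You are reviewing a clinical note summary for factual consistency.\n"
        f"Clinical note:\n{source_note}\n\n"
        f"Draft summary:\n{draft_summary}\n\n"
        "List any factual errors in the draft and provide a corrected summary."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(synthetic_edit_feedback("Patient denies chest pain.",
                                  "Patient reports chest pain."))
```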

Hallucination

Bi-ISCA: Bidirectional Inter-Sentence Contextual Attention Mechanism for Detecting Sarcasm in User Generated Noisy Short Text

no code implementations • 23 Nov 2020 • Prakamya Mishra, Saroj Kaushik, Kuntal Dey

This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism (Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in user-generated short text using only the conversational context.
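As an illustration only (not the paper's Bi-ISCA layer), cross-attention between an encoded conversational context and an encoded response, applied in both directions, can be sketched in PyTorch as below; the `InterSentenceAttention` module and the tensor shapes are assumptions.

```python
# Sketch of bidirectional inter-sentence cross-attention: the response attends
# to the context and the context attends to the response.
import torch
import torch.nn.functional as F
from torch import nn

class InterSentenceAttention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scale = hidden_dim ** 0.5

    def forward(self, context: torch.Tensor, response: torch.Tensor):
        # context: (batch, len_c, hidden), response: (batch, len_r, hidden)
        scores = torch.bmm(response, context.transpose(1, 2)) / self.scale
        r2c = torch.bmm(F.softmax(scores, dim=-1), context)                    # (batch, len_r, hidden)
        c2r = torch.bmm(F.softmax(scores.transpose(1, 2), dim=-1), response)   # (batch, len_c, hidden)
        return r2c, c2r

# usage with random sentence encodings
attn = InterSentenceAttention(hidden_dim=128)
ctx = torch.randn(4, 20, 128)   # encoded context sentence
resp = torch.randn(4, 15, 128)  # encoded response sentence
r2c, c2r = attn(ctx, resp)
```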

Sarcasm Detection • Sentence • +1

STEPs-RL: Speech-Text Entanglement for Phonetically Sound Representation Learning

no code implementations • 23 Nov 2020 • Prakamya Mishra

STEPs-RL is trained in a supervised manner to predict the phonetic sequence of a target spoken word from the speech and text of its contextual spoken words, so that the model encodes meaningful latent representations.
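A much-simplified sketch of that training setup in PyTorch: a speech encoder and a text encoder produce a fused latent representation, from which a fixed-length phoneme sequence is predicted. The `SpeechTextPhonemePredictor` class, all dimensions, and the fixed-length decoder are assumptions for illustration, not the STEPs-RL architecture.

```python
# Sketch: supervised prediction of a phoneme sequence from context speech + text.
import torch
from torch import nn

class SpeechTextPhonemePredictor(nn.Module):
    def __init__(self, speech_dim=40, text_dim=64, hidden=128, n_phonemes=50, max_len=12):
        super().__init__()
        self.speech_enc = nn.GRU(speech_dim, hidden, batch_first=True)
        self.text_enc = nn.GRU(text_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(2 * hidden, max_len * n_phonemes)
        self.max_len, self.n_phonemes = max_len, n_phonemes

    def forward(self, speech, text):
        # speech: (batch, frames, speech_dim), text: (batch, chars, text_dim)
        _, h_s = self.speech_enc(speech)
        _, h_t = self.text_enc(text)
        latent = torch.cat([h_s[-1], h_t[-1]], dim=-1)   # fused latent representation
        logits = self.decoder(latent).view(-1, self.max_len, self.n_phonemes)
        return logits, latent

model = SpeechTextPhonemePredictor()
logits, latent = model(torch.randn(2, 100, 40), torch.randn(2, 10, 64))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 50), torch.randint(0, 50, (2 * 12,)))
```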

Representation Learning • Word Similarity

Contextualized Spoken Word Representations from Convolutional Autoencoders

no code implementations • 6 Jul 2020 • Prakamya Mishra, Pranav Mathur

Considerable work has been done on building text-based language models for different NLP tasks, but much less research has addressed audio-based language models.
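For context, the general technique — an autoencoder over audio features whose bottleneck serves as the word representation — can be sketched with a small 1D convolutional autoencoder in PyTorch. The `ConvAutoencoder` class and layer sizes are illustrative assumptions, not the paper's model.

```python
# Sketch: 1D convolutional autoencoder over mel-spectrogram frames; the
# encoder's bottleneck acts as the spoken-word embedding.
import torch
from torch import nn

class ConvAutoencoder(nn.Module):
    def __init__(self, n_mels: int = 40):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 64, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, n_mels, kernel_size=3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent word representation
        return self.decoder(z), z

model = ConvAutoencoder()
spec = torch.randn(8, 40, 64)        # batch of mel-spectrogram segments
recon, embedding = model(spec)
loss = nn.functional.mse_loss(recon, spec)
```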

Word Embeddings

Correlated Feature Selection for Tweet Spam Classification

no code implementations • 6 Nov 2019 • Prakamya Mishra

This feature-selection step is necessary because combining the highly correlated features reduces training time.
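As a rough illustration of the idea (not the paper's method, which combines correlated features rather than simply discarding them), the sketch below finds highly correlated feature pairs with a pandas correlation matrix and drops the redundant ones; the `drop_correlated` helper, the threshold, and the toy tweet features are assumptions.

```python
# Sketch: remove one feature from each highly correlated pair before training.
import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    corr = df.corr().abs()
    # keep only the upper triangle so each pair is considered once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# toy tweet features: two of them are nearly identical
rng = np.random.default_rng(0)
features = pd.DataFrame({
    "num_hashtags": rng.integers(0, 5, 100),
    "num_urls": rng.integers(0, 3, 100),
})
features["num_links"] = features["num_urls"] + rng.normal(0, 0.01, 100)  # ~duplicate
print(drop_correlated(features).columns.tolist())  # 'num_links' is removed
```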

Classification • Feature Selection • +1
