1 code implementation • ICON 2021 • Sai Muralidhar Jayanthi, Varsha Embar, Karthik Raghunathan
The wide applicability of pretrained transformer models (PTMs) for natural language tasks is well demonstrated, but their ability to comprehend short phrases of text is less explored.
no code implementations • IJCNLP 2019 • Arushi Raghuvanshi, Vijay Ramakrishnan, Varsha Embar, Lucien Carroll, Karthik Raghunathan
Large vocabulary domain-agnostic Automatic Speech Recognition (ASR) systems often mistranscribe domain-specific words and phrases.
Automatic Speech Recognition (ASR) +5
no code implementations • WS 2018 • Soumya Wadhwa, Varsha Embar, Matthias Grabmair, Eric Nyberg
In this paper, we investigate the tendency of end-to-end neural Machine Reading Comprehension (MRC) models to match shallow patterns rather than perform inference-oriented reasoning on RC benchmarks.