no code implementations • Findings (EMNLP) 2021 • Gustavo Aguilar, Bryan McCann, Tong Niu, Nazneen Rajani, Nitish Keskar, Thamar Solorio
To alleviate these challenges, we propose a character-based subword module (char2subword) that learns the subword embedding table in pre-trained models like BERT.
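The idea above — a module that consumes a subword's characters and produces a vector meant to stand in for that subword's row in a pre-trained embedding table — can be sketched minimally. This is an illustrative assumption, not the paper's implementation: the class name `Char2Subword`, the byte-level character ids, the mean pooling, and all dimensions are hypothetical choices for the sketch.

```python
import numpy as np

class Char2Subword:
    """Hypothetical sketch: map a subword's characters to a vector in the
    subword embedding space of a pre-trained model (e.g. BERT)."""

    def __init__(self, n_chars=256, char_dim=32, subword_dim=768, seed=0):
        rng = np.random.default_rng(seed)
        # Character embedding table and a projection into the subword space
        # (sizes are illustrative; 768 matches BERT-base hidden size).
        self.char_emb = rng.normal(0.0, 0.02, (n_chars, char_dim))
        self.proj = rng.normal(0.0, 0.02, (char_dim, subword_dim))

    def __call__(self, subword: str) -> np.ndarray:
        ids = [ord(c) % 256 for c in subword]      # byte-level character ids
        pooled = self.char_emb[ids].mean(axis=0)   # pool over characters
        return pooled @ self.proj                  # project to subword space

# Training such a module would minimize, e.g., the distance between its
# output and the frozen subword embedding for the same token.
m = Char2Subword()
vec = m("##ing")
print(vec.shape)  # (768,)
```

A learned module like this can then produce embeddings for out-of-vocabulary or misspelled tokens that the fixed subword table cannot represent.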
no code implementations • 30 Mar 2020 • Isabela Albuquerque, Nikhil Naik, Junnan Li, Nitish Keskar, Richard Socher
Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness.
Ranked #116 on Domain Generalization on PACS