14 Nov 2019 • Max Raphael Sobroza, Tales Marra, Deok-Hee Kim-Dufor, Claude Berrou
In the recent literature, contextual pretrained Language Models (LMs) have demonstrated their ability to generalize knowledge across several Natural Language Processing (NLP) tasks, including supervised Word Sense Disambiguation (WSD), a challenging problem in the field of Natural Language Understanding (NLU).
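As an illustrative sketch only (not the authors' method), one common way contextual LM embeddings are used for supervised WSD is nearest-neighbor matching of a target word's contextual embedding against precomputed sense embeddings. The vectors below are made-up stand-ins; in practice they would come from a pretrained LM such as BERT.

```python
import numpy as np

# Hypothetical sense embeddings for the ambiguous word "bank";
# real systems would average contextual LM embeddings of sense-annotated examples.
sense_embeddings = {
    "bank/finance": np.array([0.9, 0.1, 0.0]),
    "bank/river":   np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_embedding, senses):
    # 1-nearest-neighbor over the precomputed sense embeddings.
    return max(senses, key=lambda s: cosine(context_embedding, senses[s]))

# Made-up contextual embedding of "bank" in "deposit money at the bank".
ctx = np.array([0.85, 0.15, 0.05])
print(disambiguate(ctx, sense_embeddings))  # → bank/finance
```

The design choice here is deliberate: no classifier is trained per word, so senses unseen at training time can still be matched as long as a sense embedding exists for them.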