EMNLP (ClinicalNLP) 2020 • John Pougue Biyong, Bo Wang, Terry Lyons, Alejo J Nevado-Holgado
Encoding text with large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) and adding a simple prediction layer on top has led to impressive performance on many clinical natural language processing (NLP) tasks.
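As a minimal sketch of this common pattern (not the paper's own implementation), the following Python snippet wires a pretrained BERT encoder to a single linear prediction layer using the Hugging Face `transformers` library; the checkpoint name `bert-base-uncased`, the label count, and the example sentence are illustrative assumptions, since clinical work typically substitutes a domain-specific checkpoint.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertClassifier(nn.Module):
    """Pretrained BERT encoder followed by a simple linear prediction head."""

    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # One linear layer maps the encoder's hidden size to the label space.
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.head(cls)

# Illustrative usage with a hypothetical clinical sentence.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Patient denies chest pain."], return_tensors="pt",
                  padding=True, truncation=True)
model = BertClassifier()
logits = model(batch["input_ids"], batch["attention_mask"])
```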