no code implementations • EMNLP (NLP-COVID19) 2020 • Yulia Otmakhova, Karin Verspoor, Timothy Baldwin, Simon Šuster
Efficient discovery and exploration of biomedical literature has grown in importance in the context of the COVID-19 pandemic, and topic-based methods such as latent Dirichlet allocation (LDA) are a useful tool for this purpose.
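The LDA approach mentioned above can be illustrated with a minimal sketch. The paper does not specify an implementation; the toy documents, topic count, and use of scikit-learn below are all assumptions for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented toy corpus standing in for biomedical abstracts.
docs = [
    "covid transmission rates in hospital settings",
    "hospital transmission of covid among patients",
    "protein binding structure of the viral spike",
    "spike protein structure and binding affinity",
]

# Bag-of-words counts, then a 2-topic LDA model.
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each document receives a distribution over topics, which supports
# topic-based browsing of a collection.
doc_topics = lda.transform(X)
print(doc_topics.shape)  # one row per document, one column per topic
```

In a real system the topic distributions would be computed over thousands of papers and used to group or filter them for exploration.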
no code implementations • 25 May 2021 • Simon Šuster, Karin Verspoor, Timothy Baldwin, Jey Han Lau, Antonio Jimeno Yepes, David Martinez, Yulia Otmakhova
The COVID-19 pandemic has driven ever-greater demand for tools which enable efficient exploration of biomedical literature.
no code implementations • 18 Aug 2020 • Karin Verspoor, Simon Šuster, Yulia Otmakhova, Shevon Mendis, Zenan Zhai, Biaoyan Fang, Jey Han Lau, Timothy Baldwin, Antonio Jimeno Yepes, David Martinez
We present COVID-SEE, a system for medical literature discovery based on the concept of information exploration. It builds on several distinct text analysis and natural language processing methods to structure and organise information in publications, and augments search with a visual overview that supports exploration of a collection to identify key articles of interest.

2 code implementations • 14 May 2020 • Madhumita Sushil, Simon Šuster, Walter Daelemans
For evaluation of explanations, we create a synthetic sepsis-identification dataset, and also apply our technique to additional clinical and sentiment analysis datasets.
no code implementations • 16 Oct 2019 • Simon Šuster, Madhumita Sushil, Walter Daelemans
Memory networks have been a popular choice among neural architectures for machine reading comprehension and question answering.
1 code implementation • WS 2018 • Madhumita Sushil, Simon Šuster, Walter Daelemans
We find that the output rule-sets can explain the predictions of a neural network trained for 4-class text classification on the 20 newsgroups dataset with a macro-averaged F-score of 0.80.
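The general idea of explaining a network's predictions with an interpretable surrogate can be sketched as follows. This is not the authors' exact rule-extraction method: it substitutes a shallow decision tree trained to mimic the network, on invented synthetic data, and measures fidelity with the same macro-averaged F-score metric.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Synthetic 4-class dataset standing in for the text features.
X, y = make_classification(n_samples=400, n_features=10, n_classes=4,
                           n_informative=6, random_state=0)

# Train a small network, then record its predictions.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)
net_preds = net.predict(X)

# A shallow tree trained to mimic the network yields readable rules;
# fidelity measures how well the rules reproduce the network's output.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, net_preds)
fidelity = f1_score(net_preds, tree.predict(X), average="macro")
print(round(fidelity, 2))
```

The tree's paths can then be printed as if-then rules, which is the sense in which a rule-set "explains" the network.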
no code implementations • 3 Jul 2018 • Madhumita Sushil, Simon Šuster, Kim Luyckx, Walter Daelemans
We compare the model performance of the feature set constructed from a bag of words to that obtained from medical concepts.
1 code implementation • NAACL 2018 • Simon Šuster, Walter Daelemans
We present a new dataset for machine comprehension in the medical domain.
Ranked #1 on Question Answering on CliCR
no code implementations • 14 Nov 2017 • Madhumita Sushil, Simon Šuster, Kim Luyckx, Walter Daelemans
To understand and interpret the representations, we explore the best encoded features within the patient representations obtained from the autoencoder model.
1 code implementation • 19 Oct 2017 • Pieter Fivez, Simon Šuster, Walter Daelemans
We present an unsupervised context-sensitive spelling correction method for clinical free-text that uses word and character n-gram embeddings.
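A minimal sketch of context-sensitive correction: generate lexicon candidates within a small edit distance of the misspelling, then rank them by similarity between each candidate's vector and a context vector. The lexicon, vectors, and thresholds below are invented toys; the paper uses learned word and character n-gram embeddings over clinical text.

```python
import numpy as np

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (ca != cb))
    return d[-1]

# Toy lexicon with hand-made vectors standing in for embeddings.
vecs = {
    "aspirin":   np.array([1.0, 0.1]),
    "aspartame": np.array([0.1, 1.0]),
}
context_vec = np.array([0.9, 0.2])  # pretend average of context-word vectors

def correct(misspelling):
    # Candidates: lexicon words within edit distance 3, ranked by
    # cosine similarity between candidate and context vectors.
    cands = [w for w in vecs if edit_distance(misspelling, w) <= 3]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(cands, key=lambda w: cos(vecs[w], context_vec))

print(correct("asprin"))  # → "aspirin"
```

Ranking by context similarity rather than frequency alone is what makes the correction context-sensitive.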
1 code implementation • WS 2017 • Simon Šuster, Stéphan Tulkens, Walter Daelemans
Clinical NLP has immense potential to contribute to the revolution in clinical practice brought about by large-scale processing of clinical records.
2 code implementations • WS 2016 • Stéphan Tulkens, Simon Šuster, Walter Daelemans
In this paper, we report a knowledge-based method for Word Sense Disambiguation in the domains of biomedical and clinical text.
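Knowledge-based WSD can be sketched in its simplest (Lesk-style) form: choose the sense whose definition overlaps most with the context. The paper's actual method combines concept definitions with distributional vectors; the senses and definitions below are invented for illustration.

```python
# Invented toy sense inventory for the ambiguous word "cold".
defs = {
    "cold-temperature": "having a low temperature relative to normal",
    "cold-illness": "a mild viral infection of the nose and throat",
}

def disambiguate(context, sense_defs):
    # Count shared words between the context and each sense definition,
    # and return the sense with the largest overlap.
    ctx = set(context.lower().split())
    overlap = lambda d: len(ctx & set(d.lower().split()))
    return max(sense_defs, key=lambda s: overlap(sense_defs[s]))

sense = disambiguate("patient presented with a viral cold and sore throat", defs)
print(sense)  # → "cold-illness"
```

In the biomedical setting the sense inventory and definitions would come from a knowledge source such as a medical terminology rather than being hand-written.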
1 code implementation • NAACL 2016 • Simon Šuster, Ivan Titov, Gertjan van Noord
We present an approach to learning multi-sense word embeddings relying both on monolingual and bilingual information.
1 code implementation • 6 Sep 2015 • Manuela Hürlimann, Benno Weck, Esther van den Berg, Simon Šuster, Malvina Nissim
We present a simple and effective approach to authorship verification for Dutch, English, Spanish and Greek, which can easily be ported to other languages. We train a binary linear classifier on features describing the known and unknown documents individually, as well as on joint features comparing these two types of documents.
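The setup described above can be sketched as follows: represent a (known, unknown) document pair by each document's individual features plus comparison features, and train a linear classifier to decide same-author or not. The data and the two stylometric features below are invented toys, not the paper's feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_features(text):
    words = text.split()
    # Two toy stylometric features: average word length, vocabulary richness.
    return [sum(map(len, words)) / len(words), len(set(words)) / len(words)]

def pair_features(known, unknown):
    k, u = doc_features(known), doc_features(unknown)
    # Individual features for both documents, plus their absolute
    # differences as joint comparison features.
    return k + u + [abs(a - b) for a, b in zip(k, u)]

# Invented pairs: label 1 = same author, 0 = different authors.
pairs = [
    ("the cat sat on the mat", "the dog sat on the rug", 1),
    ("the cat sat on the mat", "quantum chromodynamics remains", 0),
    ("a b a b a b", "a b a b c", 1),
    ("a b a b a b", "extraordinarily verbose writing", 0),
]
X = np.array([pair_features(k, u) for k, u, _ in pairs])
y = np.array([lab for _, _, lab in pairs])

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))
```

The joint difference features are what let a linear model capture "these two documents are stylistically close" regardless of the absolute style of either document.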
1 code implementation • 31 Aug 2015 • Simon Šuster, Gertjan van Noord, Ivan Titov
Word representations induced from models with discrete latent variables (e.g., HMMs) have been shown to be beneficial in many NLP applications.
no code implementations • 7 Feb 2015 • Simon Šuster
We present a language complexity analysis of World of Warcraft (WoW) community texts, which we compare to texts from a general corpus of web English.