no code implementations • 1 Apr 2024 • Parker Seegmiller, Joseph Gatto, Omar Sharif, Madhusudan Basak, Sarah Masud Preum
Large language models (LLMs) have been shown to be proficient in correctly answering questions in the context of online discourse.
no code implementations • 5 Mar 2024 • Joseph Gatto, Parker Seegmiller, Omar Sharif, Sarah M. Preum
Our approach leverages the intuition that Mad Libs, categorically masked documents used in a popular word game, can be generated and solved by LLMs to produce data for DocEAE.
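As a rough illustration of this generate-and-solve idea, the sketch below prompts a chat LLM first to write a categorically masked document and then to fill in its blanks. The prompts, mask format, and model name are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: generate a categorically masked "Mad Lib" document with an LLM,
# then have the LLM solve it. Prompts, mask format, and model are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: generate a document whose event arguments are replaced by typed masks.
mad_lib = ask(
    "Write a short news report about a protest, but replace every event "
    "argument with a typed blank such as [PLACE], [ORGANIZER], or [DATE]."
)

# Step 2: solve the Mad Lib; the filled spans serve as silver argument labels.
solved = ask(f"Fill in each typed blank with a plausible value:\n\n{mad_lib}")
print(mad_lib, solved, sep="\n---\n")
```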
1 code implementation • 23 Oct 2023 • Parker Seegmiller, Sarah Masud Preum
We adopt a statistical depth for measuring distributions of transformer-based text embeddings, which we term transformer-based text embedding (TTE) depth, and introduce its practical use for both modeling and distributional inference in NLP pipelines.
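To make the notion of a depth over embeddings concrete, here is a minimal sketch that scores each embedded text by its average cosine similarity to the rest of the corpus, so more central texts receive higher depth. The model name and this particular depth function are illustrative assumptions, not necessarily the exact TTE depth defined in the paper.

```python
# Sketch: a simple distance-based depth over transformer text embeddings.
# Assumes sentence-transformers is installed; mean cosine similarity is used
# as an illustrative stand-in for the paper's depth function.
import numpy as np
from sentence_transformers import SentenceTransformer

def embedding_depth(texts: list[str]) -> np.ndarray:
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(texts, normalize_embeddings=True)  # unit-length rows
    sims = emb @ emb.T                                    # pairwise cosine similarity
    np.fill_diagonal(sims, 0.0)                           # exclude self-similarity
    return sims.sum(axis=1) / (len(texts) - 1)            # mean similarity to the rest

texts = [
    "The drug reduced symptoms.",
    "Symptoms improved on the medication.",
    "Stocks fell sharply.",
]
depth = embedding_depth(texts)
print(texts[int(depth.argmin())])  # least central (most outlying) text
```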
no code implementations • 12 Sep 2023 • Joseph Gatto, Omar Sharif, Parker Seegmiller, Philip Bohlman, Sarah Masud Preum
Additionally, we show generative LLMs significantly outperform existing encoder-based STS models when characterizing the semantic similarity between two texts with complex semantic relationships dependent on world knowledge.
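A hedged sketch of what this comparison can look like: an encoder-based cosine score next to a generative LLM asked to rate similarity directly, on a pair whose relatedness depends on world knowledge. The prompt, rating scale, and model names are illustrative assumptions rather than the paper's setup.

```python
# Sketch: encoder-based STS (cosine similarity) vs. a generative LLM judge.
# The prompt, 0-5 scale, and model names are illustrative assumptions.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

a = "He was born in the same town as the composer of Fidelio."
b = "He and Beethoven share a birthplace."  # needs world knowledge to match

# Encoder-based score: cosine similarity between sentence embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
cos = util.cos_sim(encoder.encode(a), encoder.encode(b)).item()

# Generative score: ask the LLM to rate the similarity directly.
client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content":
        f"Rate the semantic similarity of these texts from 0 to 5. "
        f"Reply with the number only.\nText 1: {a}\nText 2: {b}"}],
)
print(f"encoder cosine: {cos:.2f}, LLM rating: {resp.choices[0].message.content}")
```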
no code implementations • 16 Mar 2023 • Parker Seegmiller, Joseph Gatto, Madhusudan Basak, Diane Cook, Hassan Ghasemzadeh, John Stankovic, Sarah Preum
Medications often impose temporal constraints on everyday patient activity.
no code implementations • 17 Jan 2023 • Parker Seegmiller, Joseph Gatto, Abdullah Mamun, Hassan Ghasemzadeh, Diane Cook, John Stankovic, Sarah Masud Preum
It also addresses the challenges of accurately predicting RHBs central to MTCs (e.g., medication intake).
no code implementations • 6 Oct 2022 • Joseph Gatto, Parker Seegmiller, Garrett Johnston, Sarah M. Preum
The processing of entities in natural language is essential to many medical NLP systems.