Search Results for author: Parker Seegmiller

Found 7 papers, 1 paper with code

Do LLMs Find Human Answers To Fact-Driven Questions Perplexing? A Case Study on Reddit

no code implementations • 1 Apr 2024 • Parker Seegmiller, Joseph Gatto, Omar Sharif, Madhusudan Basak, Sarah Masud Preum

Large language models (LLMs) have been shown to be proficient in correctly answering questions in the context of online discourse.

Mad Libs Are All You Need: Augmenting Cross-Domain Document-Level Event Argument Data

no code implementations • 5 Mar 2024 • Joseph Gatto, Parker Seegmiller, Omar Sharif, Sarah M. Preum

Our approach leverages the intuition that Mad Libs, which are categorically masked documents used as a part of a popular game, can be generated and solved by LLMs to produce data for document-level event argument extraction (DocEAE).

Data Augmentation • Event Argument Extraction
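The snippet above describes the general recipe: mask argument spans in a document with categorical placeholders, then have an LLM "solve" the Mad Lib to yield new synthetic training documents. The following is a minimal illustrative sketch of that idea, not the authors' pipeline; `llm_complete` is a hypothetical wrapper around any text-generation API and the role names are made-up examples.

```python
# Illustrative Mad Libs-style augmentation for DocEAE (assumed reconstruction).
import re

def to_mad_lib(document: str, argument_spans: dict[str, str]) -> str:
    """Replace known argument spans with categorical placeholders,
    e.g. 'Seattle' -> '[PLACE]'. argument_spans maps role -> surface text."""
    masked = document
    for role, span in argument_spans.items():
        masked = masked.replace(span, f"[{role.upper()}]")
    return masked

def solve_mad_lib(masked_document: str, llm_complete) -> dict[str, str]:
    """Ask an LLM to fill each categorical blank with a plausible phrase."""
    roles = set(re.findall(r"\[([A-Z_]+)\]", masked_document))
    prompt = (
        "Fill in each bracketed category in the passage with a plausible "
        "phrase. Answer as 'CATEGORY: phrase', one per line.\n\n"
        + masked_document
    )
    completion = llm_complete(prompt)  # hypothetical LLM call
    fills = {}
    for line in completion.splitlines():
        if ":" in line:
            role, phrase = line.split(":", 1)
            if role.strip() in roles:
                fills[role.strip()] = phrase.strip()
    return fills

def augment(document: str, argument_spans: dict[str, str], llm_complete):
    """Produce one synthetic (document, argument annotations) pair."""
    masked = to_mad_lib(document, argument_spans)
    fills = solve_mad_lib(masked, llm_complete)
    synthetic = masked
    for role, phrase in fills.items():
        synthetic = synthetic.replace(f"[{role}]", phrase)
    return synthetic, {role.lower(): phrase for role, phrase in fills.items()}
```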

Statistical Depth for Ranking and Characterizing Transformer-Based Text Embeddings

1 code implementation • 23 Oct 2023 • Parker Seegmiller, Sarah Masud Preum

We adopt a statistical depth for measuring distributions of transformer-based text embeddings, termed transformer-based text embedding (TTE) depth, and introduce its practical use for both modeling and distributional inference in NLP pipelines.

Data Augmentation • In-Context Learning • +2
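A statistical depth ranks points by how central they are to a distribution, so a depth over text embeddings can rank texts from most to least representative of a corpus. The sketch below assumes spatial (L1) depth and the all-MiniLM-L6-v2 encoder purely as stand-ins; the paper's exact TTE depth definition and its released code may differ.

```python
# Hedged sketch: rank texts by a statistical depth over their embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

def spatial_depth(points: np.ndarray) -> np.ndarray:
    """Spatial depth of each row w.r.t. the empirical distribution of all
    rows: D(x) = 1 - || mean_y (x - y) / ||x - y|| ||."""
    depths = np.empty(points.shape[0])
    for i, x in enumerate(points):
        diffs = x - points                       # (n, d) differences
        norms = np.linalg.norm(diffs, axis=1)
        keep = norms > 1e-12                     # skip the point itself
        unit = diffs[keep] / norms[keep][:, None]
        depths[i] = 1.0 - np.linalg.norm(unit.mean(axis=0))
    return depths

texts = ["a factual answer", "an off-topic rant", "a concise explanation"]
model = SentenceTransformer("all-MiniLM-L6-v2")   # example encoder
embeddings = model.encode(texts)                  # (n, d) array
for text, depth in sorted(zip(texts, spatial_depth(embeddings)),
                          key=lambda pair: -pair[1]):
    print(f"{depth:.3f}  {text}")                 # most central text first
```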

Text Encoders Lack Knowledge: Leveraging Generative LLMs for Domain-Specific Semantic Textual Similarity

no code implementations • 12 Sep 2023 • Joseph Gatto, Omar Sharif, Parker Seegmiller, Philip Bohlman, Sarah Masud Preum

Additionally, we show generative LLMs significantly outperform existing encoder-based STS models when characterizing the semantic similarity between two texts with complex semantic relationships dependent on world knowledge.

Memorization • Semantic Similarity • +4
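The contrast in the snippet above is between encoder-based STS (cosine similarity of embeddings) and prompting a generative LLM to rate similarity directly. The sketch below illustrates that contrast under stated assumptions: the prompt wording, the 0-5 scale, and the `llm_complete` wrapper are illustrative choices, not the authors' setup.

```python
# Hedged sketch: encoder-based vs. generative-LLM scoring for STS.
import numpy as np

def encoder_sts(a_vec: np.ndarray, b_vec: np.ndarray) -> float:
    """Cosine similarity between two precomputed sentence embeddings."""
    return float(a_vec @ b_vec /
                 (np.linalg.norm(a_vec) * np.linalg.norm(b_vec)))

def generative_sts(text_a: str, text_b: str, llm_complete) -> float:
    """Prompt a generative LLM to rate semantic similarity on a 0-5 scale."""
    prompt = (
        "On a scale from 0 (unrelated) to 5 (equivalent), how semantically "
        f"similar are these texts?\nText A: {text_a}\nText B: {text_b}\n"
        "Answer with a single number."
    )
    reply = llm_complete(prompt)  # hypothetical LLM call
    try:
        return float(reply.strip().split()[0])
    except ValueError:
        return 0.0
```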
