1 code implementation • 2 Nov 2023 • Yuhan Zhang, Edward Gibson, Forrest Davis
We found that the probabilities assigned by LMs were more likely to align with human judgments of being "tricked" by the NPI illusion, which hinges on a structural dependency, than by the comparative and depth-charge illusions, which require sophisticated semantic understanding.
1 code implementation • ACL 2021 • Forrest Davis, Marten van Schijndel
We show that competing processes in a language act as constraints on model behavior and demonstrate that targeted fine-tuning can re-weight the learned constraints, uncovering otherwise dormant linguistic knowledge in models.
1 code implementation • CoNLL 2020 • Forrest Davis, Marten van Schijndel
Language models (LMs) trained on large quantities of text have been claimed to acquire abstract linguistic representations.
1 code implementation • ACL 2020 • Forrest Davis, Marten van Schijndel
A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e., is a grammatical sentence more probable than an ungrammatical one?).
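This minimal-pair evaluation can be sketched as follows. The papers evaluate neural LMs; the add-alpha-smoothed bigram model below is purely an illustrative stand-in, and all sentences and names are hypothetical — the point is only the scoring logic: compute each sentence's log-probability under the model and check that the grammatical member of the pair scores higher.

```python
import math
from collections import defaultdict

# Toy bigram "language model" trained on a tiny illustrative corpus.
# (The papers use neural LMs; this stand-in only demonstrates the
# minimal-pair scoring logic.)
corpus = [
    "the keys are on the table",
    "the key is on the table",
    "the dogs are barking",
    "the dog is barking",
]

counts = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    toks = ["<s>"] + sent.split()
    for a, b in zip(toks, toks[1:]):
        counts[a][b] += 1

def log_prob(sentence, alpha=0.1, vocab_size=50):
    """Add-alpha smoothed bigram log-probability of a sentence."""
    toks = ["<s>"] + sentence.split()
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        total = sum(counts[a].values())
        lp += math.log((counts[a][b] + alpha) / (total + alpha * vocab_size))
    return lp

# Minimal pair: subject-verb agreement. A model with the relevant
# knowledge should assign the grammatical variant a higher probability.
grammatical = "the keys are on the table"
ungrammatical = "the keys is on the table"
print(log_prob(grammatical) > log_prob(ungrammatical))  # → True
```

In practice the same comparison is run over large suites of such pairs, and the model's accuracy (how often it prefers the grammatical member) is reported.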