Search Results for author: Forrest Davis

Found 5 papers, 4 papers with code

Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics

1 code implementation • 2 Nov 2023 • Yuhan Zhang, Edward Gibson, Forrest Davis

We found that probabilities represented by LMs were more likely to align with human judgments of being "tricked" by the NPI illusion, which examines a structural dependency, than with judgments for the comparative and depth-charge illusions, which require sophisticated semantic understanding.
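
As a rough illustration of this kind of evaluation, the sketch below scores an NPI-illusion sentence against licensed and unlicensed controls by summed token log-probability. The model choice (GPT-2 via Hugging Face transformers), the example sentences, and the scoring code are illustrative assumptions, not the paper's stimuli.

```python
# Sketch: comparing LM probabilities on NPI-illusion sentences.
# Model choice (GPT-2) and sentences are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Summed log-probability of the sentence's tokens under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position predicts the next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs.gather(1, targets.unsqueeze(1)).sum().item()

# Licensed NPI, the classic illusion (ungrammatical, yet humans are
# often "tricked"), and an unlicensed baseline.
licensed   = "No bills that the senators voted for have ever become law."
illusion   = "The bills that no senators voted for have ever become law."
unlicensed = "The bills that the senators voted for have ever become law."

for s in (licensed, illusion, unlicensed):
    print(f"{sentence_logprob(s):9.2f}  {s}")
```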

Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning

1 code implementation • ACL 2021 • Forrest Davis, Marten van Schijndel

We show that competing processes in a language act as constraints on model behavior and demonstrate that targeted fine-tuning can re-weight the learned constraints, uncovering otherwise dormant linguistic knowledge in models.
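
A minimal sketch of what such targeted fine-tuning could look like, assuming a GPT-2 causal LM and a toy dataset of sentences that consistently resolve the competition one way; the sentences and hyperparameters are placeholders, not the paper's materials.

```python
# Sketch: re-weighting a learned constraint by fine-tuning on a small,
# targeted dataset. All data and hyperparameters are toy assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Toy targeted data: sentences disambiguated toward one resolution of a
# competing-constraint ambiguity (here, high relative clause attachment:
# singular "was" forces attachment to the singular head noun).
targeted_sentences = [
    "The servant of the actresses who was on the balcony waved.",
    "The friend of the teachers who was late apologized.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for _ in range(3):  # a few passes over the tiny dataset
    for sentence in targeted_sentences:
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        # labels=input_ids gives the standard causal-LM loss.
        loss = model(ids, labels=ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
# After fine-tuning, re-test the model's preferences to see whether the
# previously dormant constraint now surfaces in its probabilities.
```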

Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment

1 code implementation • ACL 2020 • Forrest Davis, Marten van Schijndel

A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e., is a grammatical sentence more probable than an ungrammatical sentence?).

Language Modelling • Sentence • +1
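
As a rough sketch of the minimal-pair evaluation described in the abstract above, assuming GPT-2; the sentences form an illustrative subject-verb agreement pair, not the paper's materials.

```python
# Sketch: does the LM prefer the grammatical member of a minimal pair?
# Model and sentences are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean cross-entropy over the predicted tokens, so
    # negating and rescaling gives the summed log-probability.
    return (-out.loss * (ids.size(1) - 1)).item()

grammatical   = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))
```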
