Search Results for author: Megan D. Bardolph

Found 1 paper, 0 papers with code

Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?

no code implementations • 20 Jul 2021 • James A. Michaelov, Megan D. Bardolph, Seana Coulson, Benjamin K. Bergen

Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to be better at predicting metrics used to assess human language comprehension than language models with other architectures, such as recurrent neural networks.
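A common linking step in this literature is to compute per-word surprisal from a language model and correlate it with N400 amplitude. Below is a minimal sketch of that computation, not the authors' code: it assumes the Hugging Face transformers library, uses "gpt2" as an illustrative model, and borrows the classic Kutas-and-Hillyard-style anomalous sentence as an example input.

```python
# Minimal sketch: per-token surprisal from a pretrained transformer LM,
# the kind of metric typically compared against N400 amplitude.
# Model name and example sentence are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence: str):
    """Return (token, surprisal in bits) pairs for each token after the first."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)
    # Log-probability the model assigns to each actual next token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    target_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Convert from nats to bits: surprisal = -log2 p(token | context).
    surprisal_bits = -target_log_probs / torch.log(torch.tensor(2.0))
    tokens = tokenizer.convert_ids_to_tokens(targets[0].tolist())
    return list(zip(tokens, surprisal_bits[0].tolist()))

for tok, s in token_surprisals("He spread the warm bread with socks."):
    print(f"{tok!r}: {s:.2f} bits")
```

In a study like this one, surprisal values at the critical word would then be regressed against measured N400 amplitudes to compare architectures.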
