2 code implementations • EMNLP (BlackboxNLP) 2020 • Paul Soulos, Tom McCoy, Tal Linzen, Paul Smolensky
How can neural networks perform so well on compositional tasks even though they lack explicit compositional representations?
no code implementations • EMNLP 2017 • Jungo Kasai, Bob Frank, Tom McCoy, Owen Rambow, Alexis Nasr
We present supertagging-based models for Tree Adjoining Grammar parsing that use neural network architectures and dense vector representations of supertags (elementary trees) to achieve state-of-the-art performance in unlabeled and labeled attachment scores.