no code implementations • NLP Power (ACL) 2022 • Amr Keleg, Matthias Lindemann, Danyang Liu, Wanqiu Long, Bonnie L. Webber
Automatic evaluation indicates that removing straplines and noise from the training data of a news summarizer results in higher-quality summaries, with improvements of up to 7 ROUGE points.
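As a hedged illustration of this kind of data cleaning, the sketch below drops training pairs whose reference "summary" looks like a strapline; the length and word-overlap heuristics are hypothetical stand-ins, not the paper's actual detection method.

```python
# Hypothetical heuristic (not the paper's classifier): treat very short
# summaries with little lexical overlap with the article as straplines or
# noise, and remove them from the summarizer's training data.

def looks_like_strapline(summary: str, article: str) -> bool:
    words = summary.lower().split()
    if len(words) < 6:                      # straplines tend to be very short
        return True
    article_words = set(article.lower().split())
    overlap = sum(w in article_words for w in words) / len(words)
    return overlap < 0.3                    # barely grounded in the article

def clean_corpus(pairs):
    """Keep only (article, summary) pairs that pass the filter."""
    return [(a, s) for a, s in pairs if not looks_like_strapline(s, a)]
```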
no code implementations • 20 Oct 2023 • Guillem Ramírez, Matthias Lindemann, Alexandra Birch, Ivan Titov
To curtail the frequency of these calls, one can employ a smaller language model, a student, which is continuously trained on the responses of the LLM.
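A minimal sketch of such a student–LLM pipeline, assuming a simple confidence-threshold routing policy (the names and APIs below are illustrative placeholders, not the paper's implementation):

```python
# Illustrative sketch: answer queries with a cheap student model when it is
# confident, fall back to the expensive LLM otherwise, and periodically
# fine-tune (distil) the student on the cached LLM responses.
# `student` and `llm` are hypothetical objects exposing the methods used here.

def answer(query, student, llm, buffer, threshold=0.8, batch_size=32):
    label, confidence = student.predict(query)  # placeholder student API
    if confidence >= threshold:
        return label                            # cheap path: no API call
    label = llm.annotate(query)                 # expensive path: one LLM call
    buffer.append((query, label))               # cache the response
    if len(buffer) >= batch_size:               # distil into the student
        student.fine_tune(buffer)               # placeholder training step
        buffer.clear()
    return label
```

The routing policy is the key design choice here: it decides when to trust the student and when to pay for another LLM call.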
no code implementations • 1 Oct 2023 • Matthias Lindemann, Alexander Koller, Ivan Titov
Strong inductive biases enable learning from little data and help generalization outside of the training distribution.
1 code implementation • 26 May 2023 • Matthias Lindemann, Alexander Koller, Ivan Titov
Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples.
1 code implementation • 6 Oct 2022 • Matthias Lindemann, Alexander Koller, Ivan Titov
Seq2seq models have been shown to struggle with compositional generalisation, i.e., generalising to new and potentially more complex structures than those seen during training.
1 code implementation • EMNLP 2020 • Matthias Lindemann, Jonas Groschwitz, Alexander Koller
AM dependency parsing is a linguistically principled method for neural semantic parsing with high accuracy across multiple graphbanks.
1 code implementation • COLING 2020 • Lucia Donatelli, Jonas Groschwitz, Alexander Koller, Matthias Lindemann, Pia Weißenhorn
The emergence of a variety of graph-based meaning representations (MRs) has sparked an important conversation about how to adequately represent semantic structure.
no code implementations • CoNLL 2019 • Lucia Donatelli, Meaghan Fowlie, Jonas Groschwitz, Alexander Koller, Matthias Lindemann, Mario Mina, Pia Weißenhorn
We describe the Saarland University submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL).
1 code implementation • ACL 2019 • Matthias Lindemann, Jonas Groschwitz, Alexander Koller
Most semantic parsers that map sentences to graph-based meaning representations are hand-designed for specific graphbanks.
no code implementations • WS 2019 • Asad Sayeed, Matthias Lindemann, Vera Demberg
Sentences like "Every child climbed a tree" have at least two interpretations, depending on the precedence order of the universal quantifier and the indefinite.
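For concreteness, the two readings can be written in first-order logic (standard notation, not quoted from the paper):

```latex
% Surface scope: each child climbed some, possibly different, tree.
\forall x\,[\mathit{child}(x) \rightarrow \exists y\,[\mathit{tree}(y) \wedge \mathit{climbed}(x,y)]]

% Inverse scope: there is a single tree that every child climbed.
\exists y\,[\mathit{tree}(y) \wedge \forall x\,[\mathit{child}(x) \rightarrow \mathit{climbed}(x,y)]]
```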
no code implementations • ACL 2018 • Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, Alexander Koller
We present a semantic parser for Abstract Meaning Representations which learns to parse strings into tree representations of the compositional structure of an AMR graph.
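For readers unfamiliar with AMR, here is a standard textbook example (not drawn from the paper) in PENMAN notation, for "The boy wants to sleep"; reentrant variables such as `b` below are what make AMRs graphs rather than trees:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (s / sleep-01
            :ARG0 b))
```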