1 code implementation • 8 Feb 2024 • Hye Jin Kim, Nicolas Lell, Ansgar Scherp
The models are evaluated on various chart datasets, and results show that LayoutLMv3 outperforms UDOP in all experiments.
no code implementations • 16 Nov 2023 • Andor Diera, Abdelhalim Dahou, Lukas Galke, Fabian Karl, Florian Sihler, Ansgar Scherp
Language models can serve as a valuable tool for software developers to increase productivity.
1 code implementation • 19 Oct 2023 • Marcel Hoffmann, Lukas Galke, Ansgar Scherp
We study the problem of lifelong graph learning in an open-world scenario, where a model needs to deal with new tasks and potentially unknown classes.
1 code implementation • 19 Jun 2023 • Justin Mücke, Daria Waldow, Luise Metzger, Philipp Schauz, Marcel Hoffman, Nicolas Lell, Ansgar Scherp
Firstly, we propose a regression model trained on a corpus of scientific sentences extracted from peer-reviewed scientific papers and non-scientific text to assign a score that indicates the scientificness of a sentence.
1 code implementation • 15 Jun 2023 • Nicolas Lell, Ansgar Scherp
We investigate flat minima methods, and combinations of such methods, for training graph neural networks (GNNs).
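One well-known flat-minima method is stochastic weight averaging (SWA), which keeps a running average of the weights visited late in training instead of the final iterate alone. The sketch below is illustrative only (function and parameter names are hypothetical), not the paper's implementation:

```python
# Minimal SWA sketch: average weight snapshots collected along the
# training trajectory. Weights are modeled as name -> value dicts.

def swa_average(checkpoints):
    """Average a list of weight dicts (parameter name -> value)."""
    n = len(checkpoints)
    avg = {name: 0.0 for name in checkpoints[0]}
    for ckpt in checkpoints:
        for name, value in ckpt.items():
            avg[name] += value / n
    return avg

# Usage: three snapshots saved late in training.
ckpts = [{"w": 0.9, "b": 0.1}, {"w": 1.1, "b": 0.3}, {"w": 1.0, "b": 0.2}]
avg = swa_average(ckpts)
print(avg)  # averaged weights, close to {'w': 1.0, 'b': 0.2}
```

The averaged point tends to lie in a flatter region of the loss landscape than any single snapshot, which is the motivation behind this family of methods.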
1 code implementation • 7 Dec 2022 • Andor Diera, Nicolas Lell, Aygul Garifullina, Ansgar Scherp
One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information.
1 code implementation • 30 Nov 2022 • Fabian Karl, Ansgar Scherp
Short text classification is a crucial and challenging aspect of Natural Language Processing.
Ranked #1 on Text Classification on TREC-10
1 code implementation • 5 Nov 2022 • Johannes Scherer, Ansgar Scherp, Deepayan Bhowmik
Our experiments show that it is possible to extract entities, their properties, relations between entities, and the video category from the generated captions.
no code implementations • 8 Apr 2022 • Lukas Galke, Andor Diera, Bao Xin Lin, Bhakti Khera, Tim Meuser, Tushar Singhal, Fabian Karl, Ansgar Scherp
This study reviews and compares methods for single-label and multi-label text classification, categorized into bag-of-words, sequence-based, graph-based, and hierarchical methods.
1 code implementation • 11 Mar 2022 • Maximilian Blasi, Manuel Freudenreich, Johannes Horvath, David Richerby, Ansgar Scherp
A graph summary based on equivalence classes preserves pre-defined features of a graph's vertices within a $k$-hop neighborhood, such as vertex labels and edge labels.
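A minimal version of this idea, for a 1-hop neighborhood: two vertices fall into the same equivalence class when their own label and the multiset of (edge label, neighbor label) pairs coincide. The sketch below is a simplified illustration under that assumption, not the paper's (more general, parameterized) summary models:

```python
# Toy 1-hop equivalence-class graph summary: group vertices by their
# label together with the sorted multiset of (edge label, neighbor
# label) pairs.

from collections import defaultdict

def summarize(vertex_labels, edges):
    """vertex_labels: vertex -> label; edges: (src, edge_label, dst).
    Returns a dict mapping each equivalence class key to its vertices."""
    neigh = defaultdict(list)
    for src, elabel, dst in edges:
        neigh[src].append((elabel, vertex_labels[dst]))
    classes = defaultdict(list)
    for v, label in vertex_labels.items():
        key = (label, tuple(sorted(neigh[v])))
        classes[key].append(v)
    return dict(classes)

labels = {1: "A", 2: "A", 3: "B"}
edges = [(1, "e", 3), (2, "e", 3)]
classes = summarize(labels, edges)
print(classes)  # vertices 1 and 2 share an equivalence class
```

Each equivalence class becomes one vertex in the summary graph, which can be far smaller than the original while preserving the chosen features.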
1 code implementation • 20 Dec 2021 • Lukas Galke, Iacopo Vagliano, Benedikt Franke, Tobias Zielke, Marcel Hoffmann, Ansgar Scherp
The combination of these two challenges is particularly relevant since newly emerging classes typically resemble only a tiny fraction of the data, adding to the already skewed class distribution.
no code implementations • 29 Sep 2021 • Lukas Paul Achatius Galke, Isabelle Cuber, Christoph Meyer, Henrik Ferdinand Nölscher, Angelina Sonderecker, Ansgar Scherp
We match or exceed the scores of ELMo, and only fall behind more expensive models on linguistic acceptability.
1 code implementation • 17 Sep 2021 • Lukas Galke, Isabelle Cuber, Christoph Meyer, Henrik Ferdinand Nölscher, Angelina Sonderecker, Ansgar Scherp
We match or exceed the scores of ELMo for all tasks of the GLUE benchmark except for the sentiment analysis task SST-2 and the linguistic acceptability task CoLA.
2 code implementations • ACL 2022 • Lukas Galke, Ansgar Scherp
We show that a wide multi-layer perceptron (MLP) using a Bag-of-Words (BoW) representation outperforms the recent graph-based models TextGCN and HeteGCN in an inductive text classification setting and is comparable with HyperGAT.
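The core of this approach is simple: count word occurrences per document and pass the resulting vector through a single wide hidden layer. The sketch below uses random placeholder weights and hypothetical names purely to illustrate the architecture; in practice the model is trained end-to-end:

```python
# Sketch of a BoW + one-hidden-layer MLP forward pass. Weights are
# random placeholders, not trained parameters.

import random

def bow_vector(doc, vocab):
    """Count-based Bag-of-Words vector over a fixed vocabulary."""
    vec = [0.0] * len(vocab)
    for token in doc.split():
        if token in vocab:
            vec[vocab[token]] += 1.0
    return vec

def mlp_forward(x, w1, w2):
    """One ReLU hidden layer followed by a linear output layer."""
    hidden = [max(0.0, sum(xi * w for xi, w in zip(x, row))) for row in w1]
    return [sum(h * w for h, w in zip(hidden, row)) for row in w2]

vocab = {"graph": 0, "text": 1, "model": 2}
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in vocab] for _ in range(4)]    # 4 hidden units
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)] # 2 classes
x = bow_vector("graph model model", vocab)
logits = mlp_forward(x, w1, w2)
print(len(logits))  # one logit per class
```

Despite ignoring word order entirely, such wide BoW MLPs remain a strong baseline for text classification.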
no code implementations • 19 May 2021 • M. Lautaro Hickmann, Fabian Wurzberger, Megi Hoxhalli, Arne Lochner, Jessica Töllich, Ansgar Scherp
We observe a high correlation between the attention weights and this reference metric, especially in the later decoding layers of the transformer architecture.
no code implementations • 18 May 2021 • Fabian Singhofer, Aygul Garifullina, Mathias Kern, Ansgar Scherp
To control the influence of anonymization over unstructured textual data versus structured data attributes, we introduce a modified, parameterized Mondrian algorithm.
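For context, the classical Mondrian algorithm achieves k-anonymity by recursively splitting the records on the median of the attribute with the widest range, stopping when a split would leave a partition smaller than k. The sketch below shows that unmodified baseline on numeric attributes only; the paper's variant additionally parameterizes the weighting of textual versus structured attributes:

```python
# Toy Mondrian-style k-anonymity partitioning over numeric
# quasi-identifiers. Illustrative baseline, not the paper's variant.

def mondrian(records, k):
    """records: list of equal-length tuples of numeric attributes.
    Returns partitions, each containing at least k records."""
    if len(records) < 2 * k:
        return [records]  # splitting further would violate k-anonymity
    dims = len(records[0])
    spans = [max(r[d] for r in records) - min(r[d] for r in records)
             for d in range(dims)]
    d = spans.index(max(spans))          # widest attribute
    ordered = sorted(records, key=lambda r: r[d])
    mid = len(ordered) // 2              # median split
    return mondrian(ordered[:mid], k) + mondrian(ordered[mid:], k)

data = [(21, 50), (22, 60), (35, 55), (36, 65), (50, 52), (51, 62)]
parts = mondrian(data, 2)
print([len(p) for p in parts])  # every partition holds >= 2 records
```

Records within each partition are then generalized to a common range, so any individual is indistinguishable from at least k-1 others.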
1 code implementation • 10 May 2021 • Iacopo Vagliano, Lukas Galke, Ansgar Scherp
In conclusion, it is crucial to consider the semantics of item co-occurrence when choosing an appropriate recommendation model, and to carefully decide which metadata to exploit.
2 code implementations • 11 Feb 2021 • Ishwar Venugopal, Jessica Töllich, Michael Fairbank, Ansgar Scherp
In contrast to existing studies, we evaluate our models' performance at different stages of a process, determined by quartiles of the number of events and normalized quarters of the case duration.
no code implementations • 1 Jan 2021 • Lukas Paul Achatius Galke, Benedikt Franke, Tobias Zielke, Ansgar Scherp
In most cases, i.e., in 15 out of 18 experiments, we even observe that a temporal window of size 1 is sufficient to retain at least 90%.
1 code implementation • 25 Jun 2020 • Lukas Galke, Benedikt Franke, Tobias Zielke, Ansgar Scherp
Graph neural networks (GNNs) have emerged as the standard method for numerous tasks on graph-structured data such as node classification.
1 code implementation • 22 Jul 2019 • Lukas Galke, Florian Mai, Iacopo Vagliano, Ansgar Scherp
We present multi-modal adversarial autoencoders for recommendation and evaluate them on two different tasks: citation recommendation and subject label recommendation.
1 code implementation • 15 May 2019 • Lukas Galke, Iacopo Vagliano, Ansgar Scherp
In this setup, we compare adapting pretrained graph neural networks against retraining from scratch.
1 code implementation • ICLR 2019 • Florian Mai, Lukas Galke, Ansgar Scherp
In order to address this shortcoming, we propose a learning algorithm for the Continuous Matrix Space Model, which we call Continual Multiplication of Words (CMOW).
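The distinguishing feature of CMOW is that each word is embedded as a square matrix rather than a vector, and a sentence is encoded by multiplying the word matrices in order, so the encoding is sensitive to word order (unlike an additive bag-of-words composition). A toy 2x2 sketch with made-up embeddings:

```python
# Toy CMOW-style composition: words are square matrices, sentences are
# ordered matrix products. Embedding values here are arbitrary.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def encode(sentence, emb):
    out = [[1.0, 0.0], [0.0, 1.0]]  # start from the identity matrix
    for word in sentence.split():
        out = matmul(out, emb[word])
    return out

emb = {"not": [[0.0, 1.0], [1.0, 0.0]],   # a swap matrix
       "good": [[2.0, 0.0], [0.0, 3.0]]}  # a scaling matrix
a = encode("not good", emb)
b = encode("good not", emb)
print(a == b)  # False: matrix products do not commute, so order matters
```

Because matrix multiplication is non-commutative, "not good" and "good not" receive different encodings, which a sum of word vectors cannot distinguish.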
1 code implementation • 15 May 2017 • Lukas Galke, Florian Mai, Alan Schelten, Dennis Brunsch, Ansgar Scherp
For the first time, we offer a systematic comparison of classification approaches to investigate to what extent semantic annotation can be performed using just the metadata of the documents, such as titles published as labels on the Linked Open Data cloud.