1 code implementation • 7 Sep 2023 • Thomas Gebhart, John Cobb
In this work, we bridge the gap between traditional transductive knowledge graph embedding approaches and more recent inductive relation prediction models. To do so, we introduce a generalized form of harmonic extension that leverages representations learned by transductive embedding methods to infer representations of new entities introduced at inference time, as in the inductive setting.
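The core idea of harmonic extension can be illustrated on an ordinary graph: fix the embeddings of known entities and solve a graph-Laplacian system for the unseen ones. The NumPy sketch below uses a toy path graph and scalar Laplacian; it is illustrative only and is not the paper's generalized construction:

```python
import numpy as np

# Toy graph: a path 0-1-2-3. Nodes 0 and 3 are "known" (embedded
# transductively); nodes 1 and 2 are new entities seen at inference.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian L = D - A

known = [0, 3]                     # boundary: entities with learned embeddings
new = [1, 2]                       # interior: unseen entities to infer
x_known = np.array([[0.0, 1.0],    # 2-d embeddings of the known nodes
                    [3.0, -1.0]])

# Harmonic extension: minimize the Dirichlet energy x^T L x with the
# known values fixed, i.e. solve L_UU x_U = -L_UB x_B for the new nodes.
L_UU = L[np.ix_(new, new)]
L_UB = L[np.ix_(new, known)]
x_new = np.linalg.solve(L_UU, -L_UB @ x_known)
print(x_new)  # inferred embeddings interpolate between the known ones
```

On a path graph the harmonic extension is simply linear interpolation between the endpoints, which makes the solution easy to check by hand.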
no code implementations • 19 Aug 2022 • Thomas Gebhart
Graph convolutional networks are a popular class of deep neural network algorithms that have shown success in a number of relational learning tasks.
1 code implementation • 7 Oct 2021 • Thomas Gebhart, Jakob Hansen, Paul Schrater
Knowledge graph embedding involves learning representations of entities (the vertices of the graph) and relations (the edges of the graph) such that the resulting representations encode the known factual information represented by the knowledge graph and can be used to infer new relations.
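As a concrete (and much simpler) example of the transductive setting, the classic TransE model scores a triple (h, r, t) by how closely head plus relation matches tail in embedding space. The sketch below is illustrative of that general scheme and is not necessarily the model proposed in this paper; entity and relation names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy vocabulary with randomly initialized embeddings.
entities = {name: rng.normal(size=dim)
            for name in ["paris", "france", "tokyo", "japan"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head, rel, tail):
    """TransE plausibility: larger (less negative) ||h + r - t|| penalty
    means less plausible, so we negate the norm."""
    return -np.linalg.norm(entities[head] + relations[rel] - entities[tail])

# Training would adjust embeddings so that true triples score higher
# than corrupted ones; here we only show the scoring interface.
print(transe_score("paris", "capital_of", "france"))
```

A trained model of this kind is transductive: it can only score triples over entities seen during training, which is the limitation the inductive setting addresses.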
no code implementations • 26 Jan 2021 • Thomas Gebhart, Udit Saxena, Paul Schrater
A number of recent approaches have been proposed for pruning neural network parameters at initialization with the goal of reducing the size and computational burden of models while minimally affecting their training dynamics and generalization performance.
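Pruning at initialization can be sketched with the simplest scoring criterion, weight magnitude; published methods use more sophisticated saliency scores (e.g. gradient-based), so this NumPy example is an illustrative stand-in, not any particular method from the literature:

```python
import numpy as np

rng = np.random.default_rng(42)

def prune_at_init(weights, sparsity):
    """Return a binary mask keeping the largest-magnitude weights.
    Applied once at initialization, before any training step."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)               # number of weights to drop
    threshold = np.partition(flat, k)[k]        # (k+1)-th smallest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

W = rng.normal(size=(4, 4))        # a freshly initialized weight matrix
mask = prune_at_init(W, sparsity=0.5)
W_pruned = W * mask                # zeroed weights stay zero during training
print(int(mask.sum()), "of", W.size, "weights kept")
```

The mask is fixed for the rest of training, which is what makes the effect on training dynamics and generalization a nontrivial question.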
no code implementations • NeurIPS Workshop TDA_and_Beyond 2020 • Jakob Hansen, Thomas Gebhart
We present a generalization of graph convolutional networks obtained by extending the diffusion operation underlying this class of graph neural networks.
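The diffusion operation in question is the familiar normalized-adjacency propagation of a GCN layer. A minimal NumPy sketch of that baseline operation (the thing being generalized, not the generalized operator itself):

```python
import numpy as np

# Standard GCN diffusion step: X' = D^{-1/2} (A + I) D^{-1/2} X,
# i.e. each node mixes its feature vector with its neighbors'.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
A_hat = A + np.eye(3)                 # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
P = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalized propagation

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # one 2-d feature vector per node
X_diffused = P @ X                    # one diffusion (message-passing) step
print(X_diffused)
```

Generalizing this operation amounts to replacing the scalar edge weights in `P` with richer structure while keeping the same "propagate then transform" layer pattern.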
no code implementations • 16 Oct 2019 • Samir Chowdhury, Thomas Gebhart, Steve Huntsman, Matvey Yutin
These results provide a foundation for investigating homological differences between neural network architectures and their realized structure as implied by their parameters.
1 code implementation • 28 Jan 2019 • Thomas Gebhart, Paul Schrater, Alan Hylton
The representations learned by deep neural networks are difficult to interpret in part due to their large parameter space and the complexities introduced by their multi-layer structure.
1 code implementation • 28 Nov 2017 • Thomas Gebhart, Paul Schrater
We outline a detection method for adversarial inputs to deep neural networks.