Search Results for author: Angus Brayne

Found 5 papers, 1 paper with code

On Masked Language Models for Contextual Link Prediction

no code implementations • DeeLIO (ACL) 2022 • Angus Brayne, Maciej Wiatrak, Dane Corneil

In the real world, many relational facts require context; for instance, a politician holds a given elected position only for a particular timespan.

Knowledge Graph Embedding • Knowledge Graphs +1
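
As context for the approach named in the title, here is a minimal sketch of link prediction framed as masked-token prediction with an off-the-shelf masked language model. The model choice (bert-base-uncased) and the verbalization template with its temporal-context clause are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's implementation): link prediction as
# masked-token prediction with a pretrained masked language model.
# The template below verbalizes a relational fact with temporal context
# and masks the tail entity.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# A contextualized fact: the elected position is tied to a timespan.
query = "From 2009 to 2017, the president of the United States was [MASK]."

for candidate in fill_mask(query, top_k=3):
    print(f"{candidate['token_str']}: {candidate['score']:.3f}")
```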

Retrieve to Explain: Evidence-driven Predictions with Language Models

1 code implementation • 6 Feb 2024 • Ravi Patel, Angus Brayne, Rogier Hintzen, Daniel Jaroslawicz, Georgiana Neculae, Dane Corneil

R2E is a retrieval-based language model that ranks a pre-defined set of possible answers to a research question according to the evidence in a document corpus. It uses Shapley values to identify the relative importance of individual pieces of evidence to the final prediction.

Language Modelling • Retrieval
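
The Shapley-value attribution mentioned in the abstract can be illustrated with an exact computation over a small evidence set. The toy scoring function below is a stand-in for a model's answer score given a subset of evidence, not R2E's actual model.

```python
# Minimal sketch (not R2E itself): exact Shapley values attributing a
# prediction score to individual pieces of evidence. `score` is a stand-in
# for the model's answer score given a subset of evidence documents.
from itertools import combinations
from math import factorial

def shapley_values(evidence, score):
    """Exact Shapley value of each evidence item for the full-set score."""
    n = len(evidence)
    values = {}
    for i, item in enumerate(evidence):
        others = evidence[:i] + evidence[i + 1:]
        total = 0.0
        for k in range(n):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                total += weight * (score(set(subset) | {item}) - score(set(subset)))
        values[item] = total
    return values

# Toy scoring function: additive contributions plus one pairwise synergy.
def score(subset):
    base = {"doc_a": 0.5, "doc_b": 0.3, "doc_c": 0.1}
    s = sum(base[d] for d in subset)
    if {"doc_a", "doc_b"} <= subset:  # two documents that reinforce each other
        s += 0.2
    return s

print(shapley_values(["doc_a", "doc_b", "doc_c"], score))
```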

Proxy-based Zero-Shot Entity Linking by Effective Candidate Retrieval

no code implementations • 30 Jan 2023 • Maciej Wiatrak, Eirini Arvaniti, Angus Brayne, Jonas Vetterle, Aaron Sim

A recent advancement in biomedical Entity Linking is the development of powerful two-stage algorithms: an initial candidate retrieval stage that generates a shortlist of entities for each mention, followed by a candidate ranking stage.

Entity Linking • Metric Learning +1
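
A minimal sketch of the two-stage pipeline described in the abstract, with random-vector stand-ins for the learned mention encoder and the more expensive ranking model; the entity names and all scores below are illustrative, not from the paper.

```python
# Minimal sketch (placeholder encoders, not the paper's model) of a
# two-stage entity-linking pipeline: a cheap retrieval stage shortlists
# candidates by embedding similarity, then a costlier ranking stage
# re-scores only that shortlist.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

entity_names = ["aspirin", "ibuprofen", "paracetamol"]
entity_embs = rng.standard_normal((len(entity_names), DIM))  # pre-computed index

def encode_mention(mention: str) -> np.ndarray:
    """Stand-in for a learned mention encoder."""
    return rng.standard_normal(DIM)

def cross_score(mention: str, entity: str) -> float:
    """Stand-in for an expensive ranker applied only to the shortlist."""
    return float(rng.random())

def link(mention: str, k: int = 2) -> str:
    # Stage 1: candidate retrieval over the whole entity index.
    scores = entity_embs @ encode_mention(mention)
    shortlist = np.argsort(-scores)[:k]
    # Stage 2: candidate ranking, run only on the k retrieved entities.
    return max((entity_names[i] for i in shortlist),
               key=lambda name: cross_score(mention, name))

print(link("acetylsalicylic acid"))
```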

Pseudo-Riemannian Embedding Models for Multi-Relational Graph Representations

no code implementations • 2 Dec 2022 • Saee Paliwal, Angus Brayne, Benedek Fabian, Maciej Wiatrak, Aaron Sim

In this paper we generalize single-relation pseudo-Riemannian graph embedding models to multi-relational networks, and show that the typical approach of encoding relations as manifold transformations translates from the Riemannian to the pseudo-Riemannian case.

Graph Embedding • Knowledge Graph Completion +1
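
The two ingredients named in the abstract, a pseudo-Riemannian inner product and a relation encoded as a transformation of the embedding space, can be written down directly. The Minkowski signature and the bilinear scoring rule below are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch (illustrative scoring rule, not the paper's): a Minkowski
# (pseudo-Riemannian) inner product, and a relation encoded as a linear
# transformation of the embedding space.
import numpy as np

def minkowski_inner(u: np.ndarray, v: np.ndarray) -> float:
    """<u, v> with signature (-, +, ..., +): one time-like coordinate."""
    return float(-u[0] * v[0] + u[1:] @ v[1:])

def score(head, tail, relation_matrix):
    """Score a (head, relation, tail) triple: transform the head embedding
    by the relation, then compare to the tail under the Minkowski form."""
    return minkowski_inner(relation_matrix @ head, tail)

dim = 4
rng = np.random.default_rng(0)
head, tail = rng.standard_normal(dim), rng.standard_normal(dim)
R = rng.standard_normal((dim, dim))  # one learned matrix per relation type
print(score(head, tail, R))
```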

Directed Graph Embeddings in Pseudo-Riemannian Manifolds

no code implementations • 16 Jun 2021 • Aaron Sim, Maciej Wiatrak, Angus Brayne, Páidí Creed, Saee Paliwal

The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space.

Graph Representation Learning • Link Prediction
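
One way a background geometry can encode an inductive bias for directed edges, sketched here with a Minkowski time coordinate ordering the endpoints; this construction is an illustration in the spirit of the title, not the paper's actual scoring function.

```python
# Minimal sketch (an illustrative construction, not the paper's method):
# a directed-edge score in Minkowski space, where an edge u -> v is favored
# when v is time-like separated from u and lies toward u's future.
import numpy as np

def minkowski_interval(u: np.ndarray, v: np.ndarray) -> float:
    """Squared interval with signature (-, +, ..., +); negative = time-like."""
    d = v - u
    return float(-d[0] ** 2 + d[1:] @ d[1:])

def directed_edge_score(u: np.ndarray, v: np.ndarray) -> float:
    timelike = -minkowski_interval(u, v)  # > 0 for time-like separation
    forward = v[0] - u[0]                 # > 0 when v is later than u
    return min(timelike, forward)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(4), rng.standard_normal(4)
# Asymmetry in the forward term makes the score direction-sensitive.
print(directed_edge_score(u, v), directed_edge_score(v, u))
```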
