Document-Level Relation Extraction with Structure Enhanced Transformer Encoder

Document-level relation extraction aims to discover relational facts among entity pairs in a document and has attracted increasing attention in recent years. Most existing methods can be categorized as graph-based or transformer-based. However, previous transformer-based methods neglect structural information between entities, while graph-based methods cannot extract structural information effectively because they separate the encoding stage from the structure-reasoning stage. In this paper, we propose an effective structure enhanced transformer encoder model (SETE), which integrates entity structural information into the transformer encoder. We first define a mention-level graph based on mention dependencies and convert it to a token-level graph. We then design a dual self-attention mechanism that enriches the structural and contextual information between entities, improving the inferential capability of the vanilla transformer encoder. Experiments on three public datasets show that SETE outperforms previous state-of-the-art methods, and further analyses illustrate the interpretability of our model.
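The abstract does not specify how the two attention branches are fused, so the following is only a minimal sketch of the general idea behind a "dual" self-attention: one branch attends over the full context as in a vanilla transformer, while a second branch is masked to the edges of the token-level graph, and the two outputs are combined (here by simple summation, an assumption). All function and variable names are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dual_self_attention(X, adj, Wq, Wk, Wv):
    """Sketch of a dual self-attention layer (single head).

    X   : (n, d) token representations
    adj : (n, n) token-level graph adjacency (1 = edge, 0 = no edge)
    Wq, Wk, Wv : (d, d) projection matrices

    Returns a (n, d) array combining a contextual branch (full
    attention) with a structural branch (attention restricted to
    graph neighbours). Sum fusion is an assumption for illustration.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Contextual branch: standard self-attention over all tokens.
    contextual = softmax(scores) @ V
    # Structural branch: mask out non-neighbour positions so each
    # token only attends along edges of the token-level graph.
    mask = np.where(adj > 0, 0.0, -1e9)
    structural = softmax(scores + mask) @ V
    return contextual + structural
```

In practice such a layer would replace (or augment) the self-attention sublayer of each transformer block, so the structural bias is applied during encoding rather than in a separate reasoning stage, which is the isolation the paper criticizes in graph-based methods.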


Results from the Paper


Task                 Dataset  Model               Metric  Value  Global Rank
Relation Extraction  DocRED   SETE-Roberta-large  F1      63.74  #8
Relation Extraction  DocRED   SETE-Roberta-large  Ign F1  61.78  #8
