Multi-task Pre-training Language Model for Semantic Network Completion

13 Jan 2022  ·  Da Li, Sen Yang, Kele Xu, Ming Yi, Yukai He, Huaimin Wang

Semantic networks, such as knowledge graphs, can represent knowledge by leveraging the graph structure. Although knowledge graphs show promising value in natural language processing, they suffer from incompleteness. This paper focuses on knowledge graph completion by predicting links between entities, a fundamental yet critical task. Semantic matching is a potential solution because it can handle unseen entities, with which translational-distance-based methods struggle. However, to achieve performance competitive with translational-distance-based methods, semantic-matching-based methods require large-scale training data, which is typically unavailable in practical settings. We therefore employ a language model and introduce a novel knowledge graph architecture named LP-BERT, which consists of two main stages: multi-task pre-training and knowledge graph fine-tuning. In the pre-training phase, three tasks drive the model to learn relationships from triples by predicting either entities or relations. In the fine-tuning phase, inspired by contrastive learning, we design triple-style negative sampling within a batch, which greatly increases the proportion of negative samples while keeping the training time almost unchanged. Furthermore, we propose a data augmentation method that exploits the inverse relationship of triples to improve the performance and robustness of the model. To demonstrate the effectiveness of our method, we conduct extensive experiments on three widely used datasets: WN18RR, FB15k-237, and UMLS. Our approach achieves state-of-the-art results on the WN18RR and FB15k-237 datasets; notably, Hits@10 improves by 5% over the previous state-of-the-art result on WN18RR and reaches 100% on UMLS.
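To make the fine-tuning ideas concrete, below is a minimal sketch (not the authors' released code) of two techniques the abstract describes: augmenting a triple set with inverse relations, and in-batch triple-style negative sampling, where each (head, relation) pair is scored against every tail embedding in the same batch so that the other batch members serve as negatives. Names such as `inverse_tag`, the cosine scoring, and the `temperature` value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def augment_with_inverse(triples, inverse_tag="_inv"):
    """For each (h, r, t), also emit the inverse triple (t, r + inverse_tag, h).

    `inverse_tag` is a hypothetical marker for the reversed relation.
    """
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + inverse_tag, h))
    return augmented

def in_batch_negative_loss(hr_emb, t_emb, temperature=0.05):
    """Contrastive-style loss over a batch of (head, relation) and tail embeddings.

    hr_emb, t_emb: (B, D) tensors; row i of t_emb is the true tail for row i
    of hr_emb, while the other B - 1 rows act as negatives. This multiplies
    the effective number of negatives without extra forward passes.
    """
    hr = F.normalize(hr_emb, dim=-1)
    t = F.normalize(t_emb, dim=-1)
    logits = hr @ t.T / temperature                          # (B, B) similarities
    labels = torch.arange(hr.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```

Because every row of the similarity matrix reuses the tails already embedded for the batch, the proportion of negatives grows with the batch size while the per-step compute stays nearly constant, matching the behavior described in the abstract.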

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Link Prediction | FB15k-237 | LP-BERT | MRR | 0.31 | #51 |
| Link Prediction | FB15k-237 | LP-BERT | Hits@10 | 0.490 | #49 |
| Link Prediction | FB15k-237 | LP-BERT | Hits@3 | 0.336 | #42 |
| Link Prediction | FB15k-237 | LP-BERT | Hits@1 | 0.223 | #44 |
| Link Prediction | FB15k-237 | LP-BERT | MR | 154 | #9 |
| Link Prediction | UMLS | LP-BERT | Hits@10 | 1.000 | #1 |
| Link Prediction | UMLS | LP-BERT | MR | 1.18 | #1 |
| Link Prediction | WN18RR | LP-BERT | MRR | 0.482 | #33 |
| Link Prediction | WN18RR | LP-BERT | Hits@10 | 0.752 | #5 |
| Link Prediction | WN18RR | LP-BERT | Hits@3 | 0.563 | #6 |
| Link Prediction | WN18RR | LP-BERT | Hits@1 | 0.343 | #55 |
| Link Prediction | WN18RR | LP-BERT | MR | 92 | #6 |
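The metrics in the table follow the standard link-prediction evaluation protocol: each test triple is ranked against all candidate entities, and MR, MRR, and Hits@k are aggregated from the 1-based rank of the true entity. The small helper below is a sketch under that assumption; the function name is illustrative.

```python
def link_prediction_metrics(ranks, ks=(1, 3, 10)):
    """ranks: 1-based rank of the true entity for each test triple."""
    n = len(ranks)
    metrics = {
        "MR": sum(ranks) / n,                    # mean rank (lower is better)
        "MRR": sum(1.0 / r for r in ranks) / n,  # mean reciprocal rank
    }
    for k in ks:
        # fraction of queries whose true entity ranks within the top k
        metrics[f"Hits@{k}"] = sum(r <= k for r in ranks) / n
    return metrics

# A perfect ranking on every query yields Hits@10 = 1.0 and MR = 1.0,
# which is how a row like the UMLS Hits@10 = 1.000 entry should be read.
print(link_prediction_metrics([1, 1, 1]))
```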
