no code implementations • Findings (EMNLP) 2021 • Bayu Distiawan Trisedya, Xiaojie Wang, Jianzhong Qi, Rui Zhang, Qingjun Cui
A key component of the GSC-attention is grouped attention: token-level attention constrained within each input attribute, which enables the proposed model to capture both local and global context.
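Attribute-constrained attention can be illustrated as masked scaled dot-product attention, where each token attends only to tokens sharing its attribute group. This is a hypothetical sketch, not the authors' implementation; the function name, shapes, and group encoding are assumptions.

```python
import numpy as np

def grouped_attention(Q, K, V, group_ids):
    """Scaled dot-product attention masked so that token i attends
    only to tokens j with group_ids[i] == group_ids[j] (illustrative)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # (n, n) raw attention scores
    same_group = group_ids[:, None] == group_ids[None, :]  # same-attribute mask
    scores = np.where(same_group, scores, -1e9)       # block cross-attribute attention
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over allowed tokens only
    return weights @ V

# Example: 4 tokens; the first two belong to attribute 0, the last two to attribute 1.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
out = grouped_attention(Q, K, V, np.array([0, 0, 1, 1]))
```

Because of the mask, the output for tokens of one attribute is unaffected by the values of tokens in another attribute, which is what confines the attention to local (per-attribute) context.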
no code implementations • 26 Aug 2023 • Bayu Distiawan Trisedya, Flora D Salim, Jeffrey Chan, Damiano Spina, Falk Scholer, Mark Sanderson
One strategy to address this problem is KG alignment, i.e., forming a more complete KG by merging two or more KGs.
1 code implementation • 18 Jul 2023 • Rui Zhang, Yixin Su, Bayu Distiawan Trisedya, Xiaoyan Zhao, Min Yang, Hong Cheng, Jianzhong Qi
In this paper, we propose the first fully automatic alignment method named AutoAlign, which does not require any manually crafted seed alignments.
no code implementations • 16 Oct 2022 • Rui Zhang, Xiaoyan Zhao, Bayu Distiawan Trisedya, Min Yang, Hong Cheng, Jianzhong Qi
The task of entity alignment between knowledge graphs (KGs) aims to identify every pair of entities from two different KGs that represent the same entity.
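A common embedding-based framing of this task maps both KGs into a shared vector space and pairs entities that are mutual nearest neighbours. The sketch below illustrates that idea only; it is not the proposed method, and the function name and similarity choice are assumptions.

```python
import numpy as np

def align_entities(emb_a, emb_b):
    """Return pairs (i, j) where entity i of KG A and entity j of KG B
    are each other's nearest neighbour under cosine similarity (illustrative)."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T                       # cosine similarity matrix, shape (|A|, |B|)
    best_b = sim.argmax(axis=1)         # nearest B-entity for each A-entity
    best_a = sim.argmax(axis=0)         # nearest A-entity for each B-entity
    # Keep only mutual nearest-neighbour pairs to reduce false matches.
    return [(i, int(j)) for i, j in enumerate(best_b) if best_a[j] == i]

# Toy example: entity 0 of A matches entity 1 of B, and vice versa.
pairs = align_entities(np.array([[1.0, 0.0], [0.0, 1.0]]),
                       np.array([[0.0, 1.0], [1.0, 0.0]]))
# → [(0, 1), (1, 0)]
```

The mutual-nearest-neighbour filter is a standard heuristic: a one-directional argmax can pair many A-entities with the same B-entity, while requiring agreement in both directions yields a one-to-one candidate alignment.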
no code implementations • ACL 2019 • Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, Rui Zhang
As a result, NED errors may propagate into extraction errors that degrade the overall precision and recall. To address this problem, we propose an end-to-end relation extraction model for KB enrichment based on a neural encoder-decoder model.
no code implementations • ACL 2018 • Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, Wei Wang
However, this representation is not in natural language form, making it difficult for humans to understand.
Ranked #12 on Data-to-Text Generation on WebNLG