Discourse-level Relation Extraction via Graph Pooling

1 Jan 2021  ·  I-Hung Hsu, Xiao Guo, Premkumar Natarajan, Nanyun Peng ·

The ability to capture complex linguistic structures and long-term dependencies among words in a passage is essential for discourse-level relation extraction (DRE). Graph neural networks (GNNs), a common way to encode dependency graphs, have been shown to be effective for DRE in prior work. However, relatively little attention has been paid to the receptive fields of GNNs, which can be crucial for extremely long texts that require discourse understanding. In this work, we leverage the idea of graph pooling and propose a pooling-unpooling framework for DRE tasks. The pooling branch reduces the graph size and enables the GNNs to obtain larger receptive fields within fewer layers; the unpooling branch restores the pooled graph to its original resolution so that representations for entity mentions can be extracted. We propose Clause Matching (CM), a novel linguistically inspired graph pooling method for NLP tasks. Experiments on two DRE datasets demonstrate that our models significantly improve over baselines when modeling long-term dependencies is required, which shows the effectiveness of the pooling-unpooling framework and our CM pooling method.
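The abstract describes the pooling-unpooling architecture only at a high level. Below is a minimal, illustrative sketch of one way such a framework could be wired up, assuming a precomputed hard assignment of word nodes to coarser (e.g., clause-level) nodes is available; all class and variable names are hypothetical and not taken from the paper's implementation.

```python
# Minimal sketch of a pooling-unpooling GNN (illustrative only).
# Assumes `assign` is a precomputed hard assignment of N word nodes
# to M coarse nodes, e.g., obtained from a clause segmentation.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer with mean aggregation over neighbors."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, feats):
        # adj: (N, N) adjacency with self-loops; feats: (N, in_dim)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj @ feats) / deg))


class PoolUnpoolGNN(nn.Module):
    """Pooling branch coarsens the graph so the GNN sees a larger receptive
    field in fewer layers; the unpooling branch restores node resolution."""

    def __init__(self, dim):
        super().__init__()
        self.pre_gnn = SimpleGCNLayer(dim, dim)     # on the original word graph
        self.coarse_gnn = SimpleGCNLayer(dim, dim)  # on the pooled graph
        self.post_gnn = SimpleGCNLayer(dim, dim)    # after unpooling

    def forward(self, adj, feats, assign):
        # assign: (N, M) binary matrix mapping word nodes to coarse nodes
        h = self.pre_gnn(adj, feats)

        # Pooling: coarsen features and adjacency via the assignment matrix.
        pooled_feats = assign.t() @ h                          # (M, dim)
        pooled_adj = (assign.t() @ adj @ assign).clamp(max=1)  # (M, M)
        pooled_feats = self.coarse_gnn(pooled_adj, pooled_feats)

        # Unpooling: copy each coarse representation back to its member words,
        # then merge with the fine-grained features via a residual connection.
        unpooled = assign @ pooled_feats                       # (N, dim)
        return self.post_gnn(adj, h + unpooled)
```

A usage call would pass the word-level adjacency, word embeddings, and the word-to-clause assignment, e.g. `model(adj, word_feats, assign)`, and read entity-mention representations off the returned word-level output. The actual Clause Matching pooling in the paper defines how that assignment is constructed, which this sketch does not attempt to reproduce.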
