4 code implementations • ICLR 2022 • Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen
Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.
Ranked #1 on Few-Shot Learning on CR
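For context on the few-shot claim above, here is a minimal sketch of prompt-based few-shot classification with a masked language model, the general idea this line of work builds on. The model choice (bert-base-uncased), the cloze template, and the verbalizer (label-word mapping) below are illustrative assumptions, not the paper's method.

```python
# Sketch of prompt-based sentiment classification with a masked LM.
# Template and verbalizer are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Verbalizer: map candidate label words to class names (assumed).
verbalizer = {"great": "positive", "terrible": "negative"}

def classify(sentence: str) -> str:
    # Wrap the input in a cloze-style template with a mask slot.
    prompt = f"{sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the mask position and compare MLM scores of the label words.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    scores = {
        word: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(word)].item()
        for word in verbalizer
    }
    return verbalizer[max(scores, key=scores.get)]

print(classify("The battery life of this phone is amazing."))  # -> "positive"
```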
1 code implementation • ACL 2021 • Shumin Deng, Ningyu Zhang, Luoqiu Li, Hui Chen, Huaixiao Tou, Mosha Chen, Fei Huang, Huajun Chen
Most current methods for event detection (ED) rely heavily on training instances and largely ignore the correlations between event types.
1 code implementation • 6 Apr 2021 • Luoqiu Li, Zhen Bi, Hongbin Ye, Shumin Deng, Hui Chen, Huaixiao Tou
In this paper, we propose legal provision prediction (LPP), a novel legal application that aims to predict the legal provisions relevant to a given legal affair.
1 code implementation • 1 Apr 2021 • Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen
Recent neural relation extraction approaches, though achieving promising improvements on benchmark datasets, have been reported to be vulnerable to adversarial attacks.
1 code implementation • 14 Sep 2020 • Luoqiu Li, Xiang Chen, Hongbin Ye, Zhen Bi, Shumin Deng, Ningyu Zhang, Huajun Chen
Fine-tuning pre-trained models has achieved impressive performance on standard natural language processing benchmarks.