no code implementations • CL (ACL) 2021 • Jiangming Liu, Shay B. Cohen, Mirella Lapata, Johan Bos
Abstract: We consider the task of cross-lingual semantic parsing in the style of Discourse Representation Theory (DRT), where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages.
1 code implementation • 30 Apr 2024 • Hang Du, Sicheng Zhang, Binzhu Xie, Guoshun Nan, Jiayang Zhang, Junrui Xu, Hangyu Liu, Sicong Leng, Jiangming Liu, Hehe Fan, Dajiu Huang, Jing Feng, Linli Chen, Can Zhang, Xuhuan Li, Hao Zhang, Jianhang Chen, Qimei Cui, Xiaofeng Tao
In pursuit of these answers, we present a comprehensive benchmark for Causation Understanding of Video Anomaly (CUVA).
no code implementations • 31 May 2022 • Wenjie Li, Qiaolin Xia, Junfeng Deng, Hao Cheng, Jiangming Liu, Kouying Xue, Yong Cheng, Shu-Tao Xia
As an emerging secure learning paradigm for leveraging cross-agency private data, vertical federated learning (VFL) is expected to improve advertising models by enabling the joint learning of complementary user attributes privately owned by the advertiser and the publisher.
no code implementations • NAACL 2021 • Jiangming Liu, Shay B. Cohen, Mirella Lapata
We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs).
no code implementations • 1 Oct 2020 • Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou
Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.
no code implementations • ACL 2020 • Jiangming Liu, Shay B. Cohen, Mirella Lapata
Discourse representation structures (DRSs) are scoped semantic representations for texts of arbitrary length.
no code implementations • ACL 2020 • Qiankun Fu, Yue Zhang, Jiangming Liu, Meishan Zhang
Discourse representation tree structure (DRTS) parsing is a novel semantic parsing task that has recently attracted attention.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou
Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.
no code implementations • EMNLP 2020 • Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata
Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives.
no code implementations • ACL 2019 • Jiangming Liu, Shay B. Cohen, Mirella Lapata
We introduce a novel semantic parsing task based on Discourse Representation Theory (DRT; Kamp and Reyle 1993).
no code implementations • WS 2019 • Jiangming Liu, Shay B. Cohen, Mirella Lapata
Our best system achieves a score of 84.8% F1 in the DRS parsing shared task.
Ranked #2 on DRS Parsing on PMB-2.2.0
1 code implementation • ACL 2018 • Jiangming Liu, Shay B. Cohen, Mirella Lapata
We introduce an open-domain neural semantic parser which generates formal meaning representations in the style of Discourse Representation Theory (DRT; Kamp and Reyle 1993).
no code implementations • NAACL 2018 • Qi Liu, Yue Zhang, Jiangming Liu
It is useful to leverage data available for all existing domains to enhance performance on different domains.
2 code implementations • TACL 2017 • Jiangming Liu, Yue Zhang
Both bottom-up and top-down strategies have been used for neural transition-based constituent parsing.
Ranked #17 on Constituency Parsing on Penn Treebank
1 code implementation • WS 2017 • Jiangming Liu, Yue Zhang
Starting from neural machine translation (NMT), encoder-decoder neural networks have been used for many NLP problems.
no code implementations • EACL 2017 • Jiangming Liu, Yue Zhang
However, they do not explicitly model the contribution of each word in a sentence with respect to targeted sentiment polarities.
1 code implementation • TACL 2017 • Jiangming Liu, Yue Zhang
In particular, we build a bidirectional LSTM model, which leverages the full sentence information to predict the hierarchy of constituents that each word starts and ends.