1 code implementation • EMNLP (ACL) 2021 • Wenhao Yu, Meng Jiang, Zhiting Hu, Qingyun Wang, Heng Ji, Nazneen Rajani
Knowledge-enriched text generation poses unique challenges in modeling and learning, driving active research in several core directions: integrated modeling of neural representations and symbolic information in sequential, hierarchical, and graphical structures; learning without direct supervision, owing to the cost of structured annotation; efficient optimization and inference under massive, global constraints; language grounding in multiple modalities; and generative reasoning with implicit commonsense and background knowledge.
1 code implementation • 22 Feb 2024 • Carl Edwards, Qingyun Wang, Lawrence Zhao, Heng Ji
Language-molecule models have emerged as an exciting direction for molecular discovery and understanding.
1 code implementation • 19 Jan 2024 • Hongyi Liu, Qingyun Wang, Payam Karisani, Heng Ji
In our experiments, we observed that such a model is prone to mislabeling source entities, which often also appear in the text, as target entities.
1 code implementation • 18 Jan 2024 • Qingyun Wang, Zixuan Zhang, Hongxiang Li, Xuan Liu, Jiawei Han, Huimin Zhao, Heng Ji
Fine-grained few-shot entity extraction in the chemical domain faces two unique challenges.
1 code implementation • 23 May 2023 • Qingyun Wang, Doug Downey, Heng Ji, Tom Hope
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
1 code implementation • 25 Aug 2022 • Qingyun Wang, Manling Li, Hou Pong Chan, Lifu Huang, Julia Hockenmaier, Girish Chowdhary, Heng Ji
Goal-oriented generative script learning aims to generate the subsequent steps needed to reach a particular goal, an essential task for assisting robots or humans in performing stereotypical activities.
1 code implementation • ACL 2021 • Qingyun Wang, Semih Yavuz, Victoria Lin, Heng Ji, Nazneen Rajani
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.
Ranked #3 on Data-to-Text Generation on WebNLG (using extra training data)
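The core idea of feeding a graph to a PLM is to linearize its triples into a token sequence the model can consume. A minimal sketch of one common linearization scheme (the special tokens and example triples here are illustrative, not necessarily the paper's exact format):

```python
def linearize(triples):
    """Flatten (head, relation, tail) triples into a single text sequence
    that a pretrained sequence-to-sequence model can take as input."""
    parts = []
    for head, relation, tail in triples:
        # mark each triple component with a special token
        parts.append(f"<H> {head} <R> {relation} <T> {tail}")
    return " ".join(parts)

# hypothetical WebNLG-style input graph
triples = [("Alan Bean", "occupation", "astronaut"),
           ("Alan Bean", "mission", "Apollo 12")]
print(linearize(triples))
```

The linearized string is then tokenized and passed to the PLM like ordinary text, which is why pre-trained encoders can outperform purpose-built structured graph encoders on this task.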
1 code implementation • INLG (ACL) 2020 • Qingyun Wang, Qi Zeng, Lifu Huang, Kevin Knight, Heng Ji, Nazneen Fatema Rajani
To assist the human review process, we build ReviewRobot, a novel system that automatically assigns a review score and writes comments for multiple categories such as novelty and meaningful comparison.
3 code implementations • 9 Oct 2020 • Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, Meng Jiang
To address this issue, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
no code implementations • NAACL 2021 • Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Yi R. Fung, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed Elsayed, Martha Palmer, Clare Voss, Cynthia Schneider, Boyan Onyshkevych
To combat COVID-19, both clinicians and scientists need to digest vast amounts of relevant biomedical knowledge in scientific literature to understand the disease mechanism and related biological functions.
2 code implementations • ACL 2019 • Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, Yi Luan
We present PaperRobot, an automatic research assistant that (1) conducts deep understanding of a large collection of human-written papers in a target domain and constructs comprehensive background knowledge graphs (KGs); (2) creates new ideas by predicting links in the background KGs, combining graph attention with contextual text attention; and (3) incrementally writes key elements of a new paper using memory-attention networks: from the input title and predicted related entities it generates an abstract, from the abstract it generates the conclusion and future work, and from the future work it generates a title for a follow-on paper.
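The link-prediction step can be pictured as attention-weighted neighborhood aggregation followed by a similarity score. A minimal numpy sketch of that idea, assuming toy entity embeddings (this is an illustration of the general mechanism, not PaperRobot's actual architecture):

```python
import numpy as np

def attention_weights(query, neighbors):
    # softmax over dot-product similarities: how much each neighbor contributes
    logits = neighbors @ query
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def score_candidate_link(entity_vec, neighbor_vecs, candidate_vec):
    # aggregate the entity's neighborhood with attention, fuse it with the
    # entity embedding, then score the candidate by cosine similarity
    alpha = attention_weights(entity_vec, neighbor_vecs)
    context = alpha @ neighbor_vecs
    fused = (entity_vec + context) / 2.0
    return float(fused @ candidate_vec /
                 (np.linalg.norm(fused) * np.linalg.norm(candidate_vec)))

entity = np.array([1.0, 0.0])
neighbors = np.array([[1.0, 0.0], [0.0, 1.0]])
close = score_candidate_link(entity, neighbors, np.array([1.0, 0.0]))
far = score_candidate_link(entity, neighbors, np.array([0.0, 1.0]))
```

Candidates whose embeddings align with the entity's attention-fused representation score higher, which is how unseen edges ("new ideas") get ranked.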
1 code implementation • WS 2018 • Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, Kevin Knight
We aim to automatically generate natural language descriptions about an input structured knowledge base (KB).
2 code implementations • ACL 2018 • Qingyun Wang, Zhi-Hao Zhou, Lifu Huang, Spencer Whitehead, Boliang Zhang, Heng Ji, Kevin Knight
We present a paper abstract writing system based on an attentive neural sequence-to-sequence model that can take a title as input and automatically generate an abstract.
Ranked #1 on Paper generation on ACL Title and Abstract Dataset
1 code implementation • 1st Proceedings of Alexa Prize (Alexa Prize 2017) 2017 • Jieming Ji, Qingyun Wang, Zev Battad, Jiashun Gou, Jingfei Zhou, Rahul Divekar, Craig Carlson, Mei Si
We experimented with a range of conversational activities, such as providing news and playing games, and strategies for controlling the dialogue flow.
1 code implementation • 2017 Computing in Cardiology (CinC) 2017 • Shenda Hong, Meng Wu, Yuxi Zhou, Qingyun Wang, Junyuan Shang, Hongyan Li, Junqing Xie
We propose ENCASE, which combines expert features and DNNs (Deep Neural Networks) for ECG classification.
Ranked #1 on Time Series Classification on Physionet 2017 Atrial Fibrillation (F1 (Hidden Test Set) metric)
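The fusion idea is to concatenate handcrafted signal statistics with a learned DNN representation before classification. A minimal sketch under that assumption (the specific statistics below are illustrative placeholders, not ENCASE's actual feature set):

```python
import numpy as np

def expert_features(ecg):
    """Handcrafted statistics of the ECG signal (illustrative examples:
    mean amplitude, variability, and mean absolute first difference)."""
    return np.array([ecg.mean(), ecg.std(), np.abs(np.diff(ecg)).mean()])

def fuse(ecg, dnn_embedding):
    # ENCASE-style fusion: concatenate expert features with the DNN
    # representation; the combined vector feeds a downstream classifier
    return np.concatenate([expert_features(ecg), dnn_embedding])

ecg = np.array([0.0, 1.0, 0.0, 1.0])        # toy signal
dnn_embedding = np.zeros(4)                  # stand-in for a learned embedding
combined = fuse(ecg, dnn_embedding)
```

Combining the two feature families lets the classifier exploit both domain knowledge and representations learned end-to-end from raw signals.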
no code implementations • 29 May 2015 • Feifei Shen, Zhenjian Song, Congrui Wu, Jiaqi Geng, Qingyun Wang
The study of general-purpose computation on GPUs (Graphics Processing Units) can improve the image-processing capability of microcomputer systems.