no code implementations • CCL 2020 • Wenyu Guan, Qianying Liu, Tianyi Li, Sujian Li
To solve this problem, we propose a two-step approach which first selects and orders the important data records and then generates text from the noise-reduced data.
no code implementations • LREC 2022 • Sheng Li, Jiyi Li, Qianying Liu, Zhuo Gong
Moreover, based on the speech collection, we propose a neural network-based frame-by-frame mapping method that recovers the speech content by converting adversarial speech back to human speech.
no code implementations • 1 Mar 2024 • Athanasios Tragakis, Qianying Liu, Chaitanya Kaul, Swalpa Kumar Roy, Hang Dai, Fani Deligianni, Roderick Murray-Smith, Daniele Faccio
We propose a novel transformer-style architecture called Global-Local Filter Network (GLFNet) for medical image segmentation and demonstrate its state-of-the-art performance.
1 code implementation • 19 Feb 2024 • Zengqing Wu, Shuyuan Zheng, Qianying Liu, Xu Han, Brian Inhyuk Kwon, Makoto Onizuka, Shaojie Tang, Run Peng, Chuan Xiao
Recent advancements have shown that agents powered by large language models (LLMs) possess capabilities to simulate human behaviors and societal dynamics.
no code implementations • 25 Jun 2023 • Qianying Liu, Xiao Gu, Paul Henderson, Fani Deligianni
Semi-supervised learning has demonstrated great potential in medical image segmentation by utilizing knowledge from unlabeled data.
no code implementations • 16 May 2023 • Zhuoyuan Mao, Raj Dabre, Qianying Liu, Haiyue Song, Chenhui Chu, Sadao Kurohashi
This paper studies the impact of layer normalization (LayerNorm) on zero-shot translation (ZST).
no code implementations • 12 May 2023 • Qianying Liu, Dongsheng Yang, Wenjie Zhong, Fei Cheng, Sadao Kurohashi
Numerical reasoning over table-and-text hybrid passages, such as financial reports, poses significant challenges and has numerous potential applications.
1 code implementation • 3 May 2023 • Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Jiwei Li, Sadao Kurohashi
In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3), they still lag significantly behind fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE).
1 code implementation • 29 Nov 2022 • Yibin Shen, Qianying Liu, Zhuoyuan Mao, Fei Cheng, Sadao Kurohashi
Solving math word problems is a task that requires analysing the relations among quantities and an accurate understanding of contextual natural language information.
1 code implementation • 21 Oct 2022 • Zhen Wan, Qianying Liu, Zhuoyuan Mao, Fei Cheng, Sadao Kurohashi, Jiwei Li
Relation extraction (RE) has achieved remarkable progress with the help of pre-trained language models.
1 code implementation • 14 Oct 2022 • Qianying Liu, Chaitanya Kaul, Jun Wang, Christos Anagnostopoulos, Roderick Murray-Smith, Fani Deligianni
For medical image semantic segmentation (MISS), Vision Transformers have emerged as strong alternatives to convolutional neural networks thanks to their inherent ability to capture long-range correlations.
no code implementations • 13 Oct 2022 • Qianying Liu, Wenyu Guan, Jianhao Shen, Fei Cheng, Sadao Kurohashi
To address this problem, we propose a novel search algorithm with a combinatorial strategy, ComSearch, which can compress the search space by excluding mathematically equivalent equations.
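The core idea — shrinking an equation search space by merging mathematically equivalent candidates — can be illustrated with a toy sketch. This is not the paper's ComSearch algorithm; equivalence here is approximated by evaluating each candidate expression on a few random variable assignments and grouping expressions that agree on all samples:

```python
import itertools
import random

def signature(expr, variables, samples):
    """Fingerprint an expression by its values on sampled assignments."""
    sig = []
    for assignment in samples:
        env = dict(zip(variables, assignment))
        sig.append(round(eval(expr, {}, env), 6))
    return tuple(sig)

def compress(variables, ops=("+", "-", "*")):
    """Group all binary expressions over `variables` into equivalence classes."""
    random.seed(0)
    samples = [[random.uniform(1, 10) for _ in variables] for _ in range(5)]
    classes = {}
    # Enumerate every binary expression over all orderings of the variables.
    for a, b in itertools.permutations(variables, 2):
        for op in ops:
            expr = f"{a} {op} {b}"
            classes.setdefault(signature(expr, variables, samples), []).append(expr)
    return list(classes.values())

for cls in compress(["x", "y"]):
    print(cls)
# "x + y" / "y + x" and "x * y" / "y * x" collapse into single classes,
# while "x - y" and "y - x" remain distinct.
```

Searching over one representative per class rather than every raw expression is what compresses the space; the hypothetical `signature` test here stands in for a proper symbolic-equivalence check.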
1 code implementation • 21 Sep 2022 • Yibin Shen, Qianying Liu, Zhuoyuan Mao, Zhen Wan, Fei Cheng, Sadao Kurohashi
To solve math word problems, human students leverage diverse reasoning logic that reaches different possible equation solutions.
no code implementations • 18 May 2022 • Zhen Wan, Fei Cheng, Qianying Liu, Zhuoyuan Mao, Haiyue Song, Sadao Kurohashi
Contrastive pre-training on distant supervision has shown remarkable effectiveness in improving supervised relation extraction tasks.
1 code implementation • 8 Apr 2022 • Qianying Liu, Zhuo Gong, Zhengdong Yang, Yuhang Yang, Sheng Li, Chenchen Ding, Nobuaki Minematsu, Hao Huang, Fei Cheng, Chenhui Chu, Sadao Kurohashi
Low-resource speech recognition has long suffered from insufficient training data.
no code implementations • 10 Nov 2021 • Qianying Liu, Fei Cheng, Sadao Kurohashi
Meta learning with auxiliary languages has demonstrated promising improvements for cross-lingual natural language processing.
no code implementations • 10 Oct 2020 • Jun Wang, Qianying Liu, Haotian Xie, Zhaogang Yang, Hefeng Zhou
In this paper, a Convolutional Neural Network (CNN) is adapted to predict and classify lymph node metastasis in breast cancer.
1 code implementation • 4 Oct 2020 • Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng, Daisuke Kawahara, Sadao Kurohashi
Automatically solving math word problems is a critical task in the field of natural language processing.
2 code implementations • CCL 2020 • Qianying Liu, Sicong Jiang, Yizhong Wang, Sujian Li
In this paper, we introduce LiveQA, a new question answering dataset constructed from play-by-play live broadcast.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ranran Haoran Zhang, Qianying Liu, Aysa Xuemo Fan, Heng Ji, Daojian Zeng, Fei Cheng, Daisuke Kawahara, Sadao Kurohashi
We propose a novel Sequence-to-Unordered-Multi-Tree (Seq2UMTree) model to minimize the effects of exposure bias by limiting the decoding length to three within a triplet and removing the order among triplets.
no code implementations • EMNLP (NLP-COVID19) 2020 • Akiko Aizawa, Frederic Bergeron, Junjie Chen, Fei Cheng, Katsuhiko Hayashi, Kentaro Inui, Hiroyoshi Ito, Daisuke Kawahara, Masaru Kitsuregawa, Hirokazu Kiyomaru, Masaki Kobayashi, Takashi Kodama, Sadao Kurohashi, Qianying Liu, Masaki Matsubara, Yusuke Miyao, Atsuyuki Morishima, Yugo Murawaki, Kazumasa Omura, Haiyue Song, Eiichiro Sumita, Shinji Suzuki, Ribeka Tanaka, Yu Tanaka, Masashi Toyoda, Nobuhiro Ueda, Honai Ueoka, Masao Utiyama, Ying Zhong
The global pandemic of COVID-19 has made the public pay close attention to related news, covering various domains, such as sanitation, treatment, and effects on education.
2 code implementations • 24 Nov 2019 • Daojian Zeng, Ranran Haoran Zhang, Qianying Liu
The model is extremely weak at distinguishing the head and tail entities, resulting in inaccurate entity extraction.
Ranked #12 on Relation Extraction on WebNLG
no code implementations • IJCNLP 2019 • Qianying Liu, Wenyu Guan, Sujian Li, Daisuke Kawahara
To address this problem, we propose a tree-structured decoding method that generates the abstract syntax tree of the equation in a top-down manner.
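The interface of such a decoder can be sketched with a minimal toy: the equation is built root-first as an abstract syntax tree, where each step emits either an operator node (which then recursively expands its two children) or a leaf quantity. This illustrates only the top-down expansion order, not the paper's neural model:

```python
OPS = {"+", "-", "*", "/"}

def decode(tokens):
    """Consume a prefix token stream and build the equation AST top-down."""
    token = next(tokens)
    if token in OPS:
        left = decode(tokens)    # expand the left child first
        right = decode(tokens)   # then the right child
        return (token, left, right)
    return float(token)          # leaf: a quantity from the problem text

def evaluate(node):
    """Compute the value of a decoded AST."""
    if isinstance(node, float):
        return node
    op, left, right = node
    a, b = evaluate(left), evaluate(right)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

# "(3 + 5) * 2" in prefix order, as a top-down decoder would emit it.
tree = decode(iter("* + 3 5 2".split()))
print(evaluate(tree))  # 16.0
```

In the neural setting, the hypothetical `next(tokens)` call would be replaced by the decoder predicting the next node conditioned on its parent and the problem encoding.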
no code implementations • ALTA 2019 • Wenyu Guan, Qianying Liu, Guangzhi Han, Bin Wang, Sujian Li
The methods first generate a rough sketch in the coarse stage and then use the sketch to get the final result in the fine stage.