no code implementations • COLING 2022 • Haoxiang Shi, Rongsheng Zhang, Jiaan Wang, Cen Wang, Yinhe Zheng, Tetsuya Sakai
Pre-trained Language Models (PLMs) are the cornerstone of modern Natural Language Processing (NLP).
no code implementations • 28 May 2024 • Haoxiang Shi, xulong Zhang, Ning Cheng, Yong Zhang, Jun Yu, Jing Xiao, Jianzong Wang
Previous Emotion Recognition in Conversation (ERC) methods relied on simple connections for cross-modal fusion and ignored the information differences between modalities, resulting in models unable to focus on modality-specific emotional information.
no code implementations • 28 May 2024 • Jianzong Wang, Haoxiang Shi, Kaiyi Luo, xulong Zhang, Ning Cheng, Jing Xiao
For unpaired data, to effectively capture the latent discriminative features, the high-order relationships between unpaired data and anchors, computed via efficient linear reconstruction, are embedded into the latent subspace.
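The anchor-based linear reconstruction described above can be sketched with a least-squares solve: each unpaired sample is expressed as a linear combination of anchor embeddings, and the coefficients relate it to the anchor set. This is a minimal illustration, not the paper's exact formulation; all names and shapes are illustrative.

```python
import numpy as np

def reconstruct_with_anchors(x, anchors):
    """Represent sample x as a linear combination of anchor points.

    Least-squares sketch of anchor-based linear reconstruction;
    variable names and shapes are illustrative, not from the paper.
    anchors: (d, m) matrix whose columns are anchor embeddings.
    """
    coeffs, *_ = np.linalg.lstsq(anchors, x, rcond=None)
    residual = float(np.linalg.norm(anchors @ coeffs - x))
    return coeffs, residual

# A point lying in the anchors' span reconstructs exactly.
anchors = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.0, 0.0]])  # two anchors in R^3
x = anchors @ np.array([0.3, 0.7])
coeffs, res = reconstruct_with_anchors(x, anchors)
```

Points outside the anchors' span leave a nonzero residual, which is one way the relationship between a sample and the anchor set can be quantified.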
no code implementations • 20 May 2024 • Haoxiang Shi, Jiaan Wang, Jiarong Xu, Cen Wang, Tetsuya Sakai
Our preliminary analysis of English text-to-table datasets highlights two key factors for dataset construction: data diversity and data hallucination.
no code implementations • 13 Mar 2024 • ZiQi Liang, Haoxiang Shi, Jiawei Wang, Keda Lu
Recurrent neural networks have become a standard, widely used modeling technique for sequential data in TTS systems.
no code implementations • 5 Aug 2023 • Haoxiang Shi, Sumio Fujita, Tetsuya Sakai
In addition, consistency filtering often struggles to identify retrieval intentions and recognize query and corpus distributions in a target domain.
1 code implementation • 7 Mar 2023 • Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie zhou
In detail, we regard ChatGPT as a human evaluator and give task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instructions to prompt ChatGPT to evaluate the generated results of NLG models.
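The task- and aspect-specific prompting described above can be sketched as a simple prompt-assembly step. The wording below is a hypothetical template, not the paper's actual instruction text.

```python
def build_eval_prompt(task, aspect, source, output):
    """Assemble a task- and aspect-specific instruction for an LLM judge.

    A minimal sketch of the prompting scheme; the exact instruction
    wording used in the paper may differ.
    """
    return (
        f"You will evaluate a {task} system.\n"
        f"Rate the following output for {aspect} on a scale of 1-5.\n"
        f"Source: {source}\n"
        f"Output: {output}\n"
        f"Score:"
    )

prompt = build_eval_prompt(
    "summarization", "relevance",
    "The match ended 2-1 after extra time.",
    "The game finished 2-1.",
)
```

The returned string would then be sent to the model, and the numeric score parsed from its reply.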
1 code implementation • 18 Jul 2022 • Jiaan Wang, Tingyi Zhang, Haoxiang Shi
Sports game summarization aims to generate sports news based on real-time commentaries.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Haoxiang Shi, Cen Wang, Tetsuya Sakai
This paper presents a deep neural architecture that applies a siamese convolutional neural network with shared model parameters to learn a semantic similarity metric between two sentences.
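The siamese idea above can be sketched in plain numpy: one convolutional encoder with a shared set of filters processes both inputs, and similarity is the cosine between the pooled features. The toy 1-D signals, filter sizes, and max pooling are illustrative choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(seq, kernels):
    """Shared encoder: valid 1-D convolutions + max pooling over time.

    seq: (length,) toy 1-D signal standing in for token embeddings.
    kernels: list of (k,) filters shared between both branches.
    """
    feats = []
    for w in kernels:
        k = len(w)
        conv = [seq[i:i + k] @ w for i in range(len(seq) - k + 1)]
        feats.append(max(conv))  # max pooling over positions
    return np.array(feats)

def siamese_similarity(a, b, kernels):
    """Cosine similarity between features from the shared encoder."""
    fa, fb = encode(a, kernels), encode(b, kernels)
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))

kernels = [rng.standard_normal(3) for _ in range(4)]
x = rng.standard_normal(10)
sim_same = siamese_similarity(x, x, kernels)  # identical inputs -> 1.0
```

Sharing `kernels` between the two branches is what makes the network siamese: both sentences are mapped into the same feature space, so their distance is meaningful.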
no code implementations • 17 Nov 2020 • Haoxiang Shi, Cen Wang
Contrastive learning is a promising approach to unsupervised learning, as it inherits the advantages of well-studied deep models without a dedicated and complex model design.
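A common contrastive objective (InfoNCE) illustrates the approach: pull an augmented "positive" view toward its anchor while pushing unrelated "negative" samples away. This is a generic formulation for illustration, not necessarily the exact objective used in the paper.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss for one anchor (a common formulation;
    illustrative, not necessarily the paper's exact objective).

    Treats the positive as class 0 in a softmax over cosine similarities.
    """
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

rng = np.random.default_rng(1)
a = rng.standard_normal(8)
positive = a + 0.01 * rng.standard_normal(8)  # slightly perturbed view
negatives = [rng.standard_normal(8) for _ in range(5)]
loss = info_nce(a, positive, negatives)
```

Because the loss is a negative log-probability it is always positive, and it shrinks as the positive view becomes more similar to the anchor relative to the negatives.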