no code implementations • COLING 2022 • Ziming Li, Yan Zhou, Weibo Zhang, Yaxin Liu, Chuanpeng Yang, Zheng Lian, Songlin Hu
Our model also achieves state-of-the-art performance on a widely used sarcasm dataset.
2 code implementations • 26 Apr 2024 • Zheng Lian, Haiyang Sun, Licai Sun, Zhuofan Wen, Siyuan Zhang, Shun Chen, Hao Gu, Jinming Zhao, Ziyang Ma, Xie Chen, Jiangyan Yi, Rui Liu, Kele Xu, Bin Liu, Erik Cambria, Guoying Zhao, Björn W. Schuller, JianHua Tao
In addition to expanding the dataset size, we introduce a new track around open-vocabulary emotion recognition.
no code implementations • 22 Mar 2024 • Zhuofan Wen, Fengyu Zhang, Siyuan Zhang, Haiyang Sun, Mingyu Xu, Licai Sun, Zheng Lian, Bin Liu, JianHua Tao
Multimodal fusion is a key technique for most multimodal tasks.
no code implementations • 18 Feb 2024 • Kang Chen, Zheng Lian, Haiyang Sun, Bin Liu, JianHua Tao
To address data scarcity, this paper proposes a new data collection pipeline.
1 code implementation • 11 Jan 2024 • Licai Sun, Zheng Lian, Bin Liu, JianHua Tao
Audio-Visual Emotion Recognition (AVER) has garnered increasing attention in recent years for its critical role in creating emotion-aware intelligent machines.
Ranked #3 on Dynamic Facial Expression Recognition on MAFW
1 code implementation • 31 Dec 2023 • Licai Sun, Zheng Lian, Kexin Wang, Yu He, Mingyu Xu, Haiyang Sun, Bin Liu, JianHua Tao
Video-based facial affect analysis has recently attracted increasing attention owing to its critical role in human-computer interaction.
Ranked #3 on Dynamic Facial Expression Recognition on FERV39k
1 code implementation • 7 Dec 2023 • Zheng Lian, Licai Sun, Haiyang Sun, Kang Chen, Zhuofan Wen, Hao Gu, Bin Liu, JianHua Tao
To bridge this gap, we present the quantitative evaluation results of GPT-4V on 21 benchmark datasets covering 6 tasks: visual sentiment analysis, tweet sentiment analysis, micro-expression recognition, facial emotion recognition, dynamic facial emotion recognition, and multimodal emotion recognition.
1 code implementation • 21 Sep 2023 • Qi Fan, Haolin Zuo, Rui Liu, Zheng Lian, Guanglai Gao
This approach includes two pivotal components: firstly, a noise scheduler that adjusts the type and level of noise in the data to emulate various realistic incomplete situations.
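The noise scheduler described above can be pictured with a minimal sketch (not the authors' code; function and parameter names are illustrative): per-modality features are corrupted at a configurable type and level to emulate realistic incomplete situations.

```python
import numpy as np

rng = np.random.default_rng(0)

def schedule_noise(features, noise_type="drop", level=0.3):
    """Return a corrupted copy of per-modality feature matrices.

    noise_type="drop"  : zero out roughly a `level` fraction of frames
    noise_type="gauss" : add Gaussian noise with standard deviation `level`
    """
    out = {}
    for name, x in features.items():
        x = x.copy()
        if noise_type == "drop":
            mask = rng.random(x.shape[0]) < level  # frames to drop
            x[mask] = 0.0
        elif noise_type == "gauss":
            x = x + rng.normal(0.0, level, size=x.shape)
        out[name] = x
    return out

# Toy audio/video features: 10 frames each.
feats = {"audio": np.ones((10, 4)), "video": np.ones((10, 8))}
noisy = schedule_noise(feats, noise_type="drop", level=0.5)
```

Varying `noise_type` and `level` during training exposes the model to a spectrum of missing-data conditions, which is the role the scheduler plays in the pipeline.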
1 code implementation • 5 Jul 2023 • Licai Sun, Zheng Lian, Bin Liu, JianHua Tao
Dynamic facial expression recognition (DFER) is essential to the development of intelligent and empathetic machines.
Ranked #2 on Dynamic Facial Expression Recognition on FERV39k
no code implementations • 12 Jun 2023 • Haiyang Sun, FuLin Zhang, Zheng Lian, Yingying Guo, Shilei Zhang
Additionally, considering that humans adjust their perception of emotional words in textual semantic based on certain cues present in speech, we design a novel search space and search for the optimal fusion strategy for the two types of information.
3 code implementations • 18 Apr 2023 • Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang, Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, JianHua Tao
The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.
no code implementations • 6 Mar 2023 • Mingyu Xu, Zheng Lian
Partial-label learning (PLL) is an important branch of weakly supervised learning in which the single ground-truth label resides in a set of candidate labels, yet existing research rarely considers label imbalance.
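The PLL setting can be made concrete with a small sketch (illustrative only, not the paper's method): each instance carries a candidate set containing the hidden ground truth, and a naive objective rewards probability mass placed anywhere in that set.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 5

def make_partial_labels(true_label, extra=2):
    """Build a candidate set: the true label plus `extra` random distractors."""
    others = [c for c in range(num_classes) if c != true_label]
    distractors = rng.choice(others, size=extra, replace=False)
    return {true_label, *distractors}

def candidate_loss(probs, candidates):
    """Naive PLL loss: negative log of total probability on the candidate set."""
    mass = sum(probs[c] for c in candidates)
    return -np.log(mass)

# A model's predicted distribution over 5 classes; true label is 1 (hidden).
probs = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
cands = make_partial_labels(true_label=1, extra=2)
loss = candidate_loss(probs, cands)  # mass on candidates = 0.6 + 0.1 + 0.1
```

Noisy PLL, studied in the entry below this one, further relaxes the setting so the ground truth may fall outside the candidate set, which breaks the assumption this naive loss relies on.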
1 code implementation • 9 Nov 2022 • Zheng Lian, Mingyu Xu, Lan Chen, Licai Sun, Bin Liu, JianHua Tao
In this paper, we relax this assumption and focus on a more general problem, noisy PLL, where the ground-truth label may not exist in the candidate set.
1 code implementation • COLING 2022 • Yifan Jin, Jiangmeng Li, Zheng Lian, Chengbo Jiao, Xiaohui Hu
However, the quality of the 1-best dependency tree for medical texts produced by an out-of-domain parser is relatively limited, so the performance of medical relation extraction methods may degrade.
1 code implementation • 16 Aug 2022 • Licai Sun, Zheng Lian, Bin Liu, JianHua Tao
With the proliferation of user-generated online videos, Multimodal Sentiment Analysis (MSA) has attracted increasing attention recently.
no code implementations • 23 Jul 2022 • Haiyang Sun, Zheng Lian, Bin Liu, JianHua Tao, Licai Sun, Cong Cai
In this paper, we propose the solution to the Multi-Task Learning (MTL) Challenge of the 4th Affective Behavior Analysis in-the-wild (ABAW) competition.
no code implementations • 25 Mar 2022 • Haiyang Sun, Zheng Lian, Bin Liu, Ying Li, Licai Sun, Cong Cai, JianHua Tao, Meng Wang, Yuan Cheng
Speech emotion recognition (SER) is an important research topic in human-computer interaction.
1 code implementation • 4 Mar 2022 • Zheng Lian, Lan Chen, Licai Sun, Bin Liu, JianHua Tao
To this end, we propose a novel framework for incomplete multimodal learning in conversations, called "Graph Complete Network (GCNet)", filling the gap of existing works.
no code implementations • 17 Feb 2022 • Jiangyan Yi, Ruibo Fu, JianHua Tao, Shuai Nie, Haoxin Ma, Chenglong Wang, Tao Wang, Zhengkun Tian, Ye Bai, Cunhang Fan, Shan Liang, Shiming Wang, Shuai Zhang, Xinrui Yan, Le Xu, Zhengqi Wen, Haizhou Li, Zheng Lian, Bin Liu
Audio deepfake detection is an emerging topic that was included in the ASVspoof 2021 challenge.
no code implementations • 17 Sep 2021 • Zheng Lian, Yanan Zhang, Haichang Li, Rui Wang, Xiaohui Hu
The conventional encoder-decoder framework for image captioning generally adopts a single-pass decoding process, which predicts the target descriptive sentence word by word in temporal order.
no code implementations • 6 Aug 2021 • Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, C. Karen Liu, Silvio Savarese, Hyowon Gweon, Jiajun Wu, Li Fei-Fei
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation, spanning a range of everyday household chores such as cleaning, maintenance, and food preparation.
no code implementations • Interspeech 2020 • Zheng Lian, JianHua Tao, Bin Liu, Jian Huang, Zhanlei Yang, Rongjun Li
Emotion recognition remains a complex task due to speaker variations and low-resource training samples.
Ranked #1 on Speech Emotion Recognition on IEMOCAP (using extra training data)
no code implementations • 24 Oct 2019 • Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang
Different from the emotion recognition in individual utterances, we propose a multimodal learning framework using relation and dependencies among the utterances for conversational emotion analysis.
no code implementations • 24 Oct 2019 • Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang
Prior works on speech emotion recognition utilize various unsupervised learning approaches to deal with low-resource samples.
no code implementations • 24 Oct 2019 • Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang
The secondary task is to learn a common representation in which speaker identities cannot be distinguished.
no code implementations • 23 Oct 2019 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang, Ming-Yue Niu
To sum up, the contributions of this paper lie in two areas: 1) we visualize the facial regions attended to in emotion recognition; 2) we analyze, through experiments, the contribution of different face areas to different emotions in real-world conditions.
no code implementations • 23 Oct 2019 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang
It outperforms the baseline system optimized without the contrastive loss function by 1.14% in weighted accuracy and 2.55% in unweighted accuracy.
no code implementations • 11 Nov 2018 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang
I have submitted a new version to arXiv:1910.13806.
1 code implementation • 13 Sep 2018 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang
We test our method in the EmotiW 2018 challenge and obtain promising results.