no code implementations • 27 Apr 2024 • Tao Meng, FuChen Zhang, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li
Since consistency and complementarity information correspond to low-frequency and high-frequency information, respectively, this paper revisits the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum.
no code implementations • 27 Apr 2024 • Yuntao Shou, Tao Meng, FuChen Zhang, Nan Yin, Keqin Li
Specifically, on the one hand, in the feature disentanglement stage, we propose Broad Mamba, which does not rely on a self-attention mechanism for sequence modeling; instead, it uses state space models to compress emotional representations and broad learning systems to explore the potential data distribution in a broad space.
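The state-space alternative to self-attention mentioned above can be illustrated with a minimal sketch. This is a plain linear state-space recurrence (h_t = A h_{t-1} + B x_t, y_t = C h_t), not the paper's actual Broad Mamba layer; the matrices A, B, C here are illustrative placeholders. The key property is that the scan runs in O(T) over the sequence length, versus O(T^2) for self-attention.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.

    A toy stand-in for the sequence-modeling core that Mamba-style
    layers use in place of self-attention; one pass over the sequence.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in np.atleast_1d(np.asarray(x, dtype=float)):
        h = A @ h + B * x_t   # state update
        ys.append(C @ h)      # readout
    return np.array(ys)

# 1-dimensional state with decay 0.5: output is a leaky running sum
A = np.array([[0.5]])
B = np.array([1.0])
C = np.array([1.0])
out = ssm_scan([1.0, 1.0, 1.0], A, B, C)  # -> [1.0, 1.5, 1.75]
```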
no code implementations • 2 Apr 2024 • Nan Yin, Mengzhu Wan, Li Shen, Hitesh Laxmichand Patel, Baopu Li, Bin Gu, Huan Xiong
Inspired by recent spiking neural networks (SNNs), which emulate a biological inference process and provide an energy-efficient neural architecture, we incorporate the SNNs with CGNNs in a unified framework, named Continuous Spiking Graph Neural Networks (COS-GNN).
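The biological inference process that SNNs emulate is typically modeled with leaky integrate-and-fire (LIF) dynamics. The sketch below is a generic discrete-time LIF neuron, not the COS-GNN formulation; the time constant `tau`, threshold `v_th`, and hard reset are common textbook choices, assumed here for illustration.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leakily
    integrates the input current and emits a binary spike whenever it
    crosses the threshold v_th, after which it is reset to zero."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau          # leaky integration toward input
        s = 1.0 if v >= v_th else 0.0  # binary spike (energy-efficient)
        spikes.append(s)
        v = v * (1.0 - s)              # hard reset after a spike
    return spikes

# A constant supra-threshold current makes the neuron fire periodically
spikes = lif_forward([1.5] * 4)  # -> [0.0, 1.0, 0.0, 1.0]
```

The binary spike train is what makes the architecture energy-efficient: downstream layers only do work when a spike arrives.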
no code implementations • 7 Mar 2024 • Wei Ju, Siyu Yi, Yifan Wang, Zhiping Xiao, Zhengyang Mao, Hourun Li, Yiyang Gu, Yifang Qin, Nan Yin, Senzhang Wang, Xinwang Liu, Xiao Luo, Philip S. Yu, Ming Zhang
To tackle these issues, substantial efforts have been devoted to improving the performance of GNN models in practical real-world scenarios, as well as enhancing their reliability and robustness.
no code implementations • 1 Feb 2024 • Anke Tang, Li Shen, Yong Luo, Nan Yin, Lefei Zhang, DaCheng Tao
A notable challenge is mitigating the interference between parameters of different models, which can substantially deteriorate performance.
no code implementations • 7 Jan 2024 • Honghe Dai, Site Mo, Haoxin Wang, Nan Yin, Songhai Fan, Bixiong Li
Pre-insertion resistors (PIRs) within high-voltage circuit breakers are critical components that heat up through Joule heating when an electric current flows through them.
no code implementations • 15 Dec 2023 • Nan Yin, Mengzhu Wang, Zhenghan Chen, Giulia De Masi, Bin Gu, Huan Xiong
Current work often replaces Recurrent Neural Networks (RNNs) with SNNs, substituting binary features for continuous ones to enable efficient training; this, however, overlooks graph structure information and loses details during propagation.
no code implementations • 11 Dec 2023 • Tao Meng, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li
The main task of Multimodal Emotion Recognition in Conversations (MERC) is to identify emotions across modalities, e.g., text, audio, image, and video, which is a significant direction for realizing machine intelligence.
no code implementations • 11 Dec 2023 • Haoxin Wang, Yipeng Mo, Nan Yin, Honghe Dai, Bixiong Li, Songhai Fan, Site Mo
In recent developments, predictive models for multivariate time series analysis have achieved commendable performance by adopting the prevalent principle of channel independence.
no code implementations • 10 Dec 2023 • Yuntao Shou, Tao Meng, Wei Ai, Nan Yin, Keqin Li
Unlike the traditional single-utterance multi-modal emotion recognition or single-modal conversation emotion recognition, MCER is a more challenging problem that needs to deal with more complex emotional interaction relationships.
no code implementations • 8 Oct 2023 • Qinglun Li, Miao Zhang, Nan Yin, Quanjun Yin, Li Shen
To further improve algorithm performance and alleviate local heterogeneous overfitting in Federated Learning (FL), our algorithm combines the Sharpness Aware Minimization (SAM) optimizer and local momentum.
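The Sharpness Aware Minimization (SAM) step combined above can be sketched as follows. This is the generic two-step SAM update (perturb parameters to the approximate worst case within an L2 ball of radius rho, then descend using the gradient at that point), not the paper's federated variant; the quadratic toy loss and step sizes are illustrative assumptions.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update: ascend to the sharpest nearby point, then
    apply the gradient computed there to the original parameters.

    grad_fn(w) must return the loss gradient at w.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed point
    return w - lr * g_sharp                      # descend from original w

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
# w converges toward the flat minimum at the origin
```

In FL, each client would run such steps locally (with local momentum) before communicating, biasing the global model toward flatter, better-generalizing minima.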
no code implementations • 31 Aug 2023 • Enneng Yang, Zhenyi Wang, Li Shen, Nan Yin, Tongliang Liu, Guibing Guo, Xingwei Wang, DaCheng Tao
Next, we train the CL model by minimizing the gap between the responses of the CL model and the black-box API on synthetic data, to transfer the API's knowledge to the CL model.
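The "gap" being minimized can be sketched as a standard soft-label cross-entropy between the black-box API's output distribution and the CL model's predictions on synthetic inputs. This is a generic distillation objective assumed for illustration, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_gap(student_logits, api_probs):
    """Cross-entropy between the API's soft outputs (teacher) and the
    student's predicted distribution -- the response gap minimized when
    transferring the black-box API's knowledge on synthetic data."""
    p = softmax(student_logits)
    return -np.mean(np.sum(api_probs * np.log(p + 1e-12), axis=-1))

# A student that reproduces the API's distribution has a smaller gap
api = np.array([[0.7, 0.3]])          # probabilities returned by the API
matched = kd_gap(np.log(api), api)    # softmax(log p) == p exactly
mismatched = kd_gap(np.array([[-2.0, 2.0]]), api)
```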
no code implementations • 8 Jun 2023 • Nan Yin, Li Shen, Mengzhu Wang, Long Lan, Zeyu Ma, Chong Chen, Xian-Sheng Hua, Xiao Luo
Although graph neural networks (GNNs) have achieved impressive results in graph classification, they often require abundant task-specific labels, which can be prohibitively costly to acquire.
no code implementations • 1 Jan 2021 • Nan Yin, Zhigang Luo, Wenjie Wang, Fuli Feng, Xiang Zhang
In general, DyHCN consists of a Hypergraph Convolution (HC) module to encode the hypergraph structure at each time point and a Temporal Evolution (TE) module to capture the evolution of relations over time.
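The HC module's core operation can be sketched with the widely used degree-normalized hypergraph convolution, X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X W, where H is the node-by-hyperedge incidence matrix and Dv, De are node and hyperedge degree matrices. This is the standard formulation assumed for illustration, not necessarily DyHCN's exact layer (which would also include the temporal component).

```python
import numpy as np

def hypergraph_conv(X, H, W):
    """One degree-normalized hypergraph convolution:
    X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X W.

    X: (n_nodes, d_in) features, H: (n_nodes, n_edges) incidence
    matrix, W: (d_in, d_out) learnable weights.
    """
    Dv = np.diag(1.0 / np.sqrt(H.sum(axis=1)))  # node degrees
    De = np.diag(1.0 / H.sum(axis=0))           # hyperedge degrees
    A = Dv @ H @ De @ H.T @ Dv                  # normalized propagation
    return A @ X @ W

# 3 nodes, 2 hyperedges: node 1 belongs to both hyperedges
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
out = hypergraph_conv(np.ones((3, 2)), H, np.eye(2))
```

Applying this per time point and feeding the resulting embeddings into the TE module yields the dynamic-hypergraph pipeline described above.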