Step fusion: Local and global mutual guidance

Feature alignment is the primary means of fusing multimodal data. We propose a feature alignment method that fully fuses multimodal information by stepwise shifting and expanding feature information from different modalities so that they share a consistent representation in a common feature space. The proposed method robustly captures high-level interactions between features of different modalities, significantly improving the performance of multimodal learning, and outperforms other popular multimodal schemes across multiple tasks. Experimental evaluation on the ETT and MIT-BIH Arrhythmia datasets shows that the proposed method achieves state-of-the-art performance.
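The abstract does not specify the architecture, so the following is only a minimal sketch of the general idea it describes: per-modality features are projected into a shared space and then iteratively "shifted" toward each other over several steps, with a global fused summary guiding each local update. Every module name, dimension, and design choice below (StepwiseAlignment, shared_dim, the residual tanh updates, the averaged global summary) is a hypothetical stand-in, not the paper's method.

```python
import torch
import torch.nn as nn


class StepwiseAlignment(nn.Module):
    """Illustrative sketch: stepwise alignment of two modalities into a
    shared feature space, with a global summary guiding local updates.
    All details are assumptions; the paper's actual design may differ."""

    def __init__(self, dim_a: int, dim_b: int, shared_dim: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        # Initial projections of each modality into the shared space.
        self.proj_a = nn.Linear(dim_a, shared_dim)
        self.proj_b = nn.Linear(dim_b, shared_dim)
        # One "shift" block per step: (local feature, global summary) -> update.
        self.shift_a = nn.ModuleList(
            [nn.Linear(2 * shared_dim, shared_dim) for _ in range(steps)])
        self.shift_b = nn.ModuleList(
            [nn.Linear(2 * shared_dim, shared_dim) for _ in range(steps)])

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        h_a, h_b = self.proj_a(x_a), self.proj_b(x_b)
        for k in range(self.steps):
            g = 0.5 * (h_a + h_b)  # global summary of both modalities
            # Each local feature is nudged ("shifted") toward the shared
            # representation, guided by the global summary; the residual
            # connection keeps the updates incremental and stepwise.
            h_a = h_a + torch.tanh(self.shift_a[k](torch.cat([h_a, g], dim=-1)))
            h_b = h_b + torch.tanh(self.shift_b[k](torch.cat([h_b, g], dim=-1)))
        # Concatenate the aligned features as the fused representation.
        return torch.cat([h_a, h_b], dim=-1)


if __name__ == "__main__":
    fuser = StepwiseAlignment(dim_a=64, dim_b=128, shared_dim=32)
    fused = fuser(torch.randn(8, 64), torch.randn(8, 128))
    print(fused.shape)  # torch.Size([8, 64])
```

In this reading, the "local and global mutual guidance" of the title corresponds to the per-modality shift blocks (local) being conditioned on a fused summary (global) at every step, though the paper itself would need to be consulted for the actual formulation.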
