no code implementations • 13 Nov 2023 • Ruiquan Ge, Xiangyang Hu, Rungen Huang, Gangyong Jia, Yaqi Wang, Renshu Gu, Changmiao Wang, Elazab Ahmed, Linyan Wang, Juan Ye, Ye Li
In TTMFN, we present a two-stream multimodal co-attention transformer module that exploits both the complex relationships between different modalities and the potential connections within each modality.
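A minimal sketch of the cross-attention step underlying such a two-stream co-attention design, where each modality's tokens attend to the other modality's tokens. All names, shapes, and the single-head formulation are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    # queries from one modality attend to keys/values from the other
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)   # (n_q, n_kv)
    return softmax(scores, axis=-1) @ kv_feats   # (n_q, d)

# two streams: each modality is enriched with the other's context
rng = np.random.default_rng(0)
path = rng.normal(size=(4, 8))   # hypothetical pathology tokens
gene = rng.normal(size=(6, 8))   # hypothetical genomic tokens
path_attended = cross_attention(path, gene)
gene_attended = cross_attention(gene, path)
```

Running both directions in parallel is what makes the module "two-stream": neither modality is privileged as the sole query source.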
no code implementations • 7 May 2021 • Yiming Bao, Jun Wang, Tong Li, Linyan Wang, Jianwei Xu, Juan Ye, Dahong Qian
Specifically, the encoder of a DL model pre-trained on the source domain is used to initialize the encoder of a reconstruction model.
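The weight-transfer step described above can be sketched as copying the pre-trained encoder's parameters into a fresh reconstruction model while leaving the decoder randomly initialized. The dictionary-based parameter layout here is a hypothetical stand-in for a real framework's state dict:

```python
import copy

# hypothetical parameter stores; real models would use framework state dicts
source_model = {
    "encoder": {"conv1": [0.12, -0.34], "conv2": [0.56]},  # pre-trained on source domain
    "classifier": {"fc": [0.78]},                          # task head, not transferred
}
recon_model = {
    "encoder": {"conv1": [0.0, 0.0], "conv2": [0.0]},      # to be initialized
    "decoder": {"deconv1": [0.0]},                         # kept at its fresh init
}

# initialize the reconstruction encoder from the pre-trained encoder
recon_model["encoder"] = copy.deepcopy(source_model["encoder"])
```

Only the encoder is transferred; the decoder is then trained on the target-domain reconstruction objective.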