1 code implementation • journal 2024 • Wenlve Zhou and Zhiheng Zhou
To address this, we propose Cross-Modal Knowledge Distillation (CMKD), a novel method that uses vision-language pre-trained (VLP) models as teacher models to guide learning in the target domain, achieving state-of-the-art performance.
Ranked #1 on Domain Adaptation on ImageCLEF-DA
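To make the teacher-student idea concrete, here is a minimal sketch of knowledge distillation with a temperature-scaled KL loss, the standard mechanism by which a frozen teacher (here, a VLP model) guides a student on target-domain inputs. This is an illustrative assumption about the training objective, not the paper's exact loss; all function names and values are hypothetical.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) with temperature T, averaged over the batch.

    The T*T factor keeps gradient magnitudes comparable across temperatures
    (as in standard Hinton-style distillation).
    """
    p_t = softmax(teacher_logits, T)
    log_p_t = np.log(p_t + 1e-12)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    return float((p_t * (log_p_t - log_p_s)).sum(axis=-1).mean() * T * T)

# Toy example: the frozen VLP teacher's logits on an unlabeled target image
# supervise the student; minimizing this loss pulls the student's predictive
# distribution toward the teacher's.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.0, 1.0, 0.0]])
loss = distillation_loss(student, teacher)
```

The loss is zero only when the two softened distributions match, so driving it down transfers the teacher's (cross-modal) knowledge to the student without target-domain labels.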