LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment

Video-language (VL) pretraining has achieved remarkable improvements on multiple downstream tasks. However, current VL pretraining frameworks are hard to extend to additional modalities (N modalities, N >= 3) beyond vision and language. We therefore propose LanguageBind, which takes language as the bind across modalities, since the language modality is well explored and contains rich semantics. Specifically, we freeze the language encoder acquired from VL pretraining and train encoders for the other modalities with contrastive learning. As a result, all modalities are mapped into a shared feature space, achieving multi-modal semantic alignment. While LanguageBind lets us extend from VL to N modalities, it also requires a high-quality dataset of language-centered alignment pairs. We therefore introduce VIDAL-10M, a dataset of 10 million pairs spanning Video, Infrared, Depth, Audio, and their corresponding Language. In VIDAL-10M, all videos come from short-video platforms and carry complete semantics rather than being truncated segments of long videos, and the video, depth, infrared, and audio modalities are each aligned with textual descriptions. LanguageBind achieves superior performance on a wide range of 15 benchmarks covering video, audio, depth, and infrared. Moreover, multiple experiments provide evidence for the effectiveness of LanguageBind in achieving indirect alignment and complementarity among diverse modalities. Code: https://github.com/PKU-YuanGroup/LanguageBind
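
The recipe described above (a language encoder frozen after VL pretraining serving as the anchor, with a separate encoder per modality trained against it contrastively) boils down to a CLIP-style symmetric InfoNCE objective between each modality and language. Below is a minimal sketch of that objective; it is not the authors' implementation, and the temperature value and the commented training loop (encoder objects, data loader, field names) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def language_anchored_contrastive_loss(modality_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss between one modality (video, depth, infrared,
    or audio) and its paired language embeddings.

    modality_emb, text_emb: (batch, dim) tensors. text_emb comes from the
    frozen language encoder; modality_emb from the trainable modality encoder.
    """
    # Project both sets of embeddings onto the unit sphere.
    modality_emb = F.normalize(modality_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise cosine similarities scaled by the temperature.
    logits = modality_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions: modality-to-text and text-to-modality.
    loss_m2t = F.cross_entropy(logits, targets)
    loss_t2m = F.cross_entropy(logits.t(), targets)
    return (loss_m2t + loss_t2m) / 2

# Hypothetical training loop: only the modality encoder receives gradients;
# the language encoder inherited from VL pretraining stays frozen.
#
# text_encoder.requires_grad_(False)
# for batch in vidal_loader:
#     with torch.no_grad():
#         t = text_encoder(batch["text"])
#     m = depth_encoder(batch["depth"])
#     loss = language_anchored_contrastive_loss(m, t)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

Because every modality is aligned to the same frozen language space, any two non-language modalities (e.g., depth and audio) become indirectly aligned through it, which is what the indirect-alignment results mentioned in the abstract rely on.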


Results from the Paper


LanguageBind is ranked #1 on Zero-shot Audio Classification on VGG-Sound (using extra training data).

Task | Dataset | Model | Metric | Value | Global Rank
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-L/14) | text-to-video R@1 | 38.4 | #6
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-L/14) | text-to-video R@5 | 66.6 | #7
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-L/14) | text-to-video R@10 | 77.9 | #7
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-L/14) | video-to-text R@1 | 35.7 | #6
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-L/14) | video-to-text R@5 | 65.8 | #6
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-L/14) | video-to-text R@10 | 77.8 | #6
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-H/14) | text-to-video R@1 | 41.0 | #5
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-H/14) | text-to-video R@5 | 68.4 | #5
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-H/14) | text-to-video R@10 | 80.0 | #3
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-H/14) | video-to-text R@1 | 39.1 | #5
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-H/14) | video-to-text R@5 | 69.8 | #3
Zero-Shot Video Retrieval | ActivityNet | LanguageBind (ViT-H/14) | video-to-text R@10 | 81.1 | #3
Zero-shot Text to Audio Retrieval | AudioCaps | LanguageBind (FT) | R@10 | 67.6 | #2
Zero-shot Text to Audio Retrieval | AudioCaps | LanguageBind (FT) | Audio-to-text R@1 | 19.7 | #3
Zero-shot Text to Audio Retrieval | AudioCaps | LanguageBind (LoRA) | R@10 | 53.2 | #3
Zero-shot Text to Audio Retrieval | AudioCaps | LanguageBind (LoRA) | Audio-to-text R@1 | 12.2 | #4
Zero-shot Audio Classification | AudioSet | LanguageBind (FT) | Test mAP | 30.0 | #1
Zero-shot Audio Classification | AudioSet | LanguageBind (LoRA) | Test mAP | 27.7 | #2
Zero-shot Text to Audio Retrieval | Clotho | LanguageBind (FT) | text-to-audio R@1 | 16.7 | #2
Zero-shot Text to Audio Retrieval | Clotho | LanguageBind (FT) | text-to-audio R@10 | 52.0 | #1
Zero-shot Text to Audio Retrieval | Clotho | LanguageBind (LoRA) | text-to-audio R@1 | 12.1 | #4
Zero-shot Text to Audio Retrieval | Clotho | LanguageBind (LoRA) | text-to-audio R@10 | 44.0 | #3
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-L/14) | text-to-video R@1 | 39.7 | #9
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-L/14) | text-to-video R@5 | 65.5 | #9
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-L/14) | text-to-video R@10 | 73.8 | #9
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-L/14) | text-to-video Median Rank | 2.0 | #1
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-L/14) | video-to-text R@1 | 38.4 | #6
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-L/14) | video-to-text R@5 | 66.6 | #6
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-L/14) | video-to-text R@10 | 77.9 | #5
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-H/14) | text-to-video R@1 | 39.9 | #8
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-H/14) | text-to-video R@5 | 66.1 | #8
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-H/14) | text-to-video R@10 | 74.6 | #8
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-H/14) | text-to-video Median Rank | 2 | #1
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-H/14) | video-to-text R@1 | 39.8 | #5
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-H/14) | video-to-text R@5 | 67.8 | #5
Zero-Shot Video Retrieval | DiDeMo | LanguageBind (ViT-H/14) | video-to-text R@10 | 76.2 | #6
Zero-Shot Environment Sound Classification | ESC-50 | LanguageBind (LoRA) | Accuracy | 91.8 | #3
Zero-Shot Environment Sound Classification | ESC-50 | LanguageBind (FT) | Accuracy | 94.0 | #2
Zero-Shot Action Recognition | Kinetics | LanguageBind | Top-1 Accuracy | 64.1 | #9
Zero-Shot Action Recognition | Kinetics | LanguageBind | Top-5 Accuracy | 85.7 | #6
Zero-shot Classification (unified classes) | LLVIP | LanguageBind | Balanced Accuracy | 87.2 | #1
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | text-to-video R@1 | 42.8 | #6
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | text-to-video R@5 | 67.5 | #6
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | text-to-video R@10 | 76.0 | #5
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | text-to-video Median Rank | 2.0 | #1
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | video-to-text R@1 | 38.3 | #6
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | video-to-text R@5 | 65.8 | #4
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | video-to-text R@10 | 77.8 | #3
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-L/14) | video-to-text Median Rank | 3.0 | #2
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | text-to-video R@1 | 44.8 | #5
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | text-to-video R@5 | 70.0 | #3
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | text-to-video R@10 | 78.7 | #4
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | text-to-video Median Rank | 2 | #1
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | video-to-text R@1 | 40.9 | #3
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | video-to-text R@5 | 66.4 | #3
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | video-to-text R@10 | 75.7 | #4
Zero-Shot Video Retrieval | MSR-VTT | LanguageBind (ViT-H/14) | video-to-text Median Rank | 2 | #1
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | text-to-video R@1 | 54.1 | #3
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | text-to-video R@5 | 81.1 | #3
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | text-to-video R@10 | 88.1 | #3
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | text-to-video Median Rank | 1.0 | #1
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | video-to-text R@1 | 69.7 | #6
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | video-to-text R@5 | 91.8 | #3
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | video-to-text R@10 | 97.9 | #1
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-L/14) | video-to-text Median Rank | 1.0 | #1
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | text-to-video R@1 | 53.9 | #4
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | text-to-video R@5 | 80.4 | #4
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | text-to-video R@10 | 87.8 | #4
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | text-to-video Median Rank | 1 | #1
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | video-to-text R@1 | 72.0 | #5
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | video-to-text R@5 | 91.4 | #4
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | video-to-text R@10 | 96.3 | #4
Zero-Shot Video Retrieval | MSVD | LanguageBind (ViT-H/14) | video-to-text Median Rank | 1 | #1
Zero-shot Scene Classification (unified classes) | NYU Depth v2 | LanguageBind | Balanced Accuracy | 65.1 | #1
Zero-shot Audio Classification | VGG-Sound | LanguageBind (LoRA) | Acc@1 | 28.9 | #3
Zero-shot Audio Classification | VGG-Sound | LanguageBind (FT) | Acc@1 | 38.6 | #1
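
For context on the retrieval metrics above: R@K is the percentage of queries whose ground-truth match appears among the top K retrieved items, and median rank is the median position of the ground-truth match (lower is better). A minimal sketch of computing them from paired, L2-normalized text and video embeddings follows; the function and tensor names are illustrative, not taken from the paper's evaluation code.

```python
import torch

def retrieval_metrics(query_emb, gallery_emb, ks=(1, 5, 10)):
    """Compute R@K and median rank for paired, L2-normalized embeddings.

    query_emb, gallery_emb: (N, dim) tensors where row i of each tensor
    belongs to the same text-video pair. Passing (text, video) gives
    text-to-video metrics; swapping the arguments gives video-to-text.
    """
    sims = query_emb @ gallery_emb.t()                 # (N, N) cosine similarities
    order = sims.argsort(dim=-1, descending=True)      # gallery indices, best first
    gt = torch.arange(sims.size(0), device=sims.device).unsqueeze(1)
    ranks = (order == gt).nonzero()[:, 1].float()      # 0-based rank of the true match
    metrics = {f"R@{k}": (ranks < k).float().mean().item() * 100 for k in ks}
    metrics["Median Rank"] = ranks.median().item() + 1  # 1-based, as reported above
    return metrics
```

The zero-shot classification accuracies (e.g., ESC-50, Kinetics, VGG-Sound) are typically obtained in the usual CLIP style: class names are encoded with the frozen language encoder, and each audio, video, depth, or infrared embedding is assigned the class whose text embedding is most similar.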

Methods


No methods listed for this paper.