AIM: Adapting Image Models for Efficient Video Action Recognition

6 Feb 2023  ·  Taojiannan Yang, Yi Zhu, Yusheng Xie, Aston Zhang, Chen Chen, Mu Li ·

Recent vision-transformer-based video models mostly follow the "image pre-training then fine-tuning" paradigm and have achieved great success on multiple video benchmarks. However, fully fine-tuning such a video model can be computationally expensive and unnecessary, given that pre-trained image transformer models have demonstrated exceptional transferability. In this work, we propose a novel method to Adapt pre-trained Image Models (AIM) for efficient video understanding. By freezing the pre-trained image model and adding a few lightweight Adapters, we introduce spatial adaptation, temporal adaptation, and joint adaptation to gradually equip an image model with spatiotemporal reasoning capability. We show that our proposed AIM can achieve competitive or even better performance than prior art with substantially fewer tunable parameters on four video action recognition benchmarks. Thanks to its simplicity, our method is also generally applicable to different image pre-trained models, which has the potential to leverage more powerful image foundation models in the future. The project webpage is \url{https://adapt-image-models.github.io/}.
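The core idea in the abstract — keep the pre-trained image backbone frozen and train only small inserted Adapters — can be illustrated with a minimal NumPy sketch. This is a hypothetical bottleneck-adapter module with a residual connection; the dimensions, zero initialization of the up-projection (making the adapter an identity at the start of training), and GELU nonlinearity are common adapter conventions assumed here, not necessarily the paper's exact design.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class Adapter:
    """Lightweight bottleneck adapter: y = x + gelu(x @ W_down) @ W_up.

    Illustrative sketch of the adapter idea; only W_down / W_up would be
    trained while the backbone stays frozen.
    """
    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.standard_normal((dim, bottleneck)) * 0.02  # down-projection
        self.w_up = np.zeros((bottleneck, dim))  # zero-init: identity at start

    def __call__(self, x):
        return x + gelu(x @ self.w_down) @ self.w_up  # residual connection

# Usage: tokens of a ViT layer, shape (num_tokens, dim)
adapter = Adapter(dim=768, bottleneck=64)
x = np.random.default_rng(1).standard_normal((8, 768))
y = adapter(x)
```

Because the up-projection is zero-initialized, the adapter initially passes features through unchanged, so inserting it does not perturb the frozen backbone's behavior before training.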


Results from the Paper


Ranked #2 on Action Recognition on Diving-48 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | Diving-48 | AIM (CLIP ViT-L/14, 32x224) | Accuracy | 90.6 | #2 |
| Action Classification | Kinetics-400 | AIM (CLIP ViT-L/14, 32x224) | Acc@1 | 87.5 | #29 |
| Action Classification | Kinetics-400 | AIM (CLIP ViT-L/14, 32x224) | Acc@5 | 97.7 | #16 |
| Action Classification | Kinetics-700 | AIM (CLIP ViT-L/14, 32x224) | Top-1 Accuracy | 80.4 | #12 |
