Pretext-Contrastive Learning: Toward Good Practices in Self-supervised Video Representation Learning

29 Oct 2020 · Li Tao, Xueting Wang, Toshihiko Yamasaki

Recently, pretext-task based methods have been proposed one after another for self-supervised video feature learning, while contrastive learning methods also yield good performance. New methods typically claim to beat previous ones by capturing "better" temporal information. However, these methods differ in their experimental settings, which makes it hard to conclude which is actually better. Comparisons would be far more convincing if each method were pushed as close to its performance limit as possible. In this paper, we start from one pretext-task baseline and explore how far it can go when combined with contrastive learning, data pre-processing, and data augmentation. Extensive experiments identify a proper setting that yields large improvements over the baselines, indicating that a joint optimization framework can boost both the pretext task and contrastive learning. We denote this joint optimization framework as Pretext-Contrastive Learning (PCL). Two other pretext-task baselines are used to validate the effectiveness of PCL, and we easily outperform current state-of-the-art methods under the same training protocol, demonstrating both the effectiveness and the generality of our proposal. PCL can be conveniently treated as a standard training strategy and applied to many other works in self-supervised video feature learning.
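At its core, the joint optimization framework amounts to training one backbone with a weighted sum of a pretext-task loss and a contrastive loss. The following is a minimal PyTorch-style sketch of that idea; the names (`PCLWrapper`, `info_nce`, `pcl_loss`, `lambda_contrast`) and the exact loss composition are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a joint "pretext + contrastive" objective (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss treating (z1[i], z2[i]) as the positive pair for clip i."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


class PCLWrapper(nn.Module):
    """Wraps a video backbone with a pretext-task head and a contrastive projection head."""

    def __init__(self, backbone, feat_dim, num_pretext_classes, proj_dim=128):
        super().__init__()
        self.backbone = backbone                      # assumed to map clips -> (B, feat_dim)
        self.pretext_head = nn.Linear(feat_dim, num_pretext_classes)
        self.proj_head = nn.Linear(feat_dim, proj_dim)

    def forward(self, clips):
        feats = self.backbone(clips)
        return self.pretext_head(feats), self.proj_head(feats)


def pcl_loss(model, view1, view2, pretext_labels, lambda_contrast=1.0):
    """Joint objective: pretext classification loss + weighted contrastive loss."""
    logits1, z1 = model(view1)                        # first augmented view
    _, z2 = model(view2)                              # second augmented view of the same clip
    loss_pretext = F.cross_entropy(logits1, pretext_labels)
    loss_contrast = info_nce(z1, z2)
    return loss_pretext + lambda_contrast * loss_contrast
```

In this sketch the backbone can be any 3D CNN (e.g., C3D, R3D, or a 3D ResNet-18, as in the results below), and the pretext labels depend on which pretext-task baseline PCL is wrapped around; the relative weight of the two losses is an assumed hyperparameter.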


Datasets: UCF101, HMDB51


Results

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Self-Supervised Action Recognition | HMDB51 | PCL (ResNet-18) | Top-1 Accuracy | 43.2 | #35 |
| | | | Pre-Training Dataset | UCF101 | #1 |
| | | | Frozen | false | #1 |
| Self-supervised Video Retrieval | UCF101 | PCL (R3D) | Top-1 | 40.5 | #10 |
| Self-Supervised Action Recognition | UCF101 | PCL (ResNet-18) | 3-fold Accuracy | 82.3 | #32 |
| | | | Pre-Training Dataset | UCF101 | #1 |
| | | | Frozen | false | #1 |
| Self-supervised Video Retrieval | UCF101 | PCL (C3D) | Top-1 | 38.2 | #11 |
