no code implementations • ICCV 2023 • Peiyan Guan, Renjing Pei, Bin Shao, Jianzhuang Liu, Weimian Li, Jiaxi Gu, Hang Xu, Songcen Xu, Youliang Yan, Edmund Y. Lam
A parallel isomeric attention module serves as the video encoder; it consists of two parallel branches that model the spatio-temporal information of videos at both the patch and frame levels.
Ranked #3 on Video Retrieval on MSR-VTT-1kA
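The two-branch idea above can be illustrated with a minimal NumPy sketch: one branch attends over all patch tokens, the other attends over per-frame summaries, and the two outputs are fused. The function names and the mean-pooling/additive-fusion choices here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def attention(q, k, v):
    # scaled dot-product attention over the token axis
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v

def two_branch_encoder(video_tokens):
    # video_tokens: (frames, patches, dim)
    f, p, d = video_tokens.shape
    # patch-level branch: attention across all patches of all frames
    patch_seq = video_tokens.reshape(1, f * p, d)
    patch_out = attention(patch_seq, patch_seq, patch_seq).reshape(f, p, d)
    # frame-level branch: pool patches per frame (an assumed pooling),
    # then attend across frames
    frame_tokens = video_tokens.mean(axis=1)[None]           # (1, f, d)
    frame_out = attention(frame_tokens, frame_tokens, frame_tokens)[0]
    # fuse by broadcasting frame-level context back to every patch
    return patch_out + frame_out[:, None, :]
```

The key point the sketch captures is that the branches run in parallel over the same tokens at different granularities, rather than stacking spatial and temporal attention sequentially.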
no code implementations • CVPR 2023 • Renjing Pei, Jianzhuang Liu, Weimian Li, Bin Shao, Songcen Xu, Peng Dai, Juwei Lu, Youliang Yan
Pre-training a vision-language model and then fine-tuning it on downstream tasks has become a popular paradigm.
no code implementations • ICCV 2023 • Bin Shao, Jianzhuang Liu, Renjing Pei, Songcen Xu, Peng Dai, Juwei Lu, Weimian Li, Youliang Yan
However, compared to image-language pre-training, video-language pre-training (VLP) has lagged far behind due to the lack of large amounts of video-text pairs.
no code implementations • 15 Dec 2019 • Weimian Li, Baoyang Chen, Wenmin Wang
By integrating different latent variables with the learned transformation features, the model can produce a greater variety of possible motion modes.
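The mechanism described, combining a latent code with learned transformation features so that different latent samples yield different motions, can be sketched as follows. The decoder, weight shapes, and additive combination are all hypothetical stand-ins for the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_motion(features, z, w):
    # combine transformation features with a projected latent code;
    # sampling a different z gives a different motion prediction
    return np.tanh(features + z @ w)

features = rng.standard_normal((1, 8))   # stand-in transformation features
w = rng.standard_normal((4, 8))          # stand-in latent projection
z1, z2 = rng.standard_normal((1, 4)), rng.standard_normal((1, 4))
m1, m2 = decode_motion(features, z1, w), decode_motion(features, z2, w)
# distinct latent samples produce distinct motion modes
```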
no code implementations • 13 Jun 2017 • Xiongtao Chen, Wenmin Wang, Jinzhuo Wang, Weimian Li, Baoyang Chen
In this paper, we present a novel deep architecture called bidirectional predictive network (BiPN) that predicts intermediate frames from two opposite directions.
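The bidirectional idea, one predictor running forward from past frames and one running backward from future frames, with their outputs blended into the intermediate frame, can be sketched minimally. Naive linear extrapolation stands in for BiPN's learned encoder-decoders, and the equal-weight blend is an assumption.

```python
import numpy as np

def predict_forward(past):
    # stand-in for the forward decoder: linear extrapolation from past frames
    return past[-1] + (past[-1] - past[-2])

def predict_backward(future):
    # stand-in for the backward decoder: extrapolation from future frames
    return future[0] + (future[0] - future[1])

def bidirectional_interpolate(past, future):
    # blend the two opposite-direction predictions of the middle frame
    return 0.5 * predict_forward(past) + 0.5 * predict_backward(future)

# toy clip with uniform linear motion: frame t has constant value t
frames = [t * np.ones((2, 2)) for t in range(5)]
middle = bidirectional_interpolate(frames[:2], frames[3:])
```

On this toy uniform-motion clip both directions agree, so the blend recovers the missing middle frame exactly; in general the two predictions differ and the blend (or a learned fusion) reconciles them.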