DVANet: Disentangling View and Action Features for Multi-View Action Recognition

10 Dec 2023 · Nyle Siddiqui, Praveen Tirupattur, Mubarak Shah

In this work, we present a novel approach to multi-view action recognition in which learned action representations are guided to be separated from view-relevant information in a video. Classifying action instances captured from multiple viewpoints is more difficult because the background, occlusion, and visibility of the captured action differ across camera angles. To tackle these problems, we propose a novel configuration of learnable transformer decoder queries, in conjunction with two supervised contrastive losses, to enforce the learning of action features that are robust to shifts in viewpoint. Our disentangled feature learning occurs in two stages: the transformer decoder uses separate queries to learn action and view information, which are then further disentangled using our two contrastive losses. We show that our model and training method significantly outperform all other uni-modal models on four multi-view action recognition datasets: NTU RGB+D, NTU RGB+D 120, PKU-MMD, and N-UCLA. Compared to previous RGB works, we see maximal improvements of 1.5%, 4.8%, 2.2%, and 4.8% on these datasets, respectively.
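
To make the two-stage design above concrete, below is a minimal PyTorch sketch. It assumes a transformer decoder whose learnable queries are split into an action group and a view group, followed by a SupCon-style supervised contrastive loss applied once with action labels and once with view (camera) labels. All names, shapes, query counts, and hyperparameters here are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch, assuming a PyTorch-style implementation. Query counts,
# dimensions, and the temperature tau are illustrative, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledDecoder(nn.Module):
    def __init__(self, dim=512, n_action_q=4, n_view_q=4, n_layers=2, n_heads=8):
        super().__init__()
        # Stage 1: separate learnable queries for action vs. view information.
        self.action_queries = nn.Parameter(torch.randn(n_action_q, dim))
        self.view_queries = nn.Parameter(torch.randn(n_view_q, dim))
        layer = nn.TransformerDecoderLayer(dim, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)

    def forward(self, video_tokens):  # (B, T, dim) features from a backbone
        B = video_tokens.size(0)
        queries = torch.cat([self.action_queries, self.view_queries])
        queries = queries.unsqueeze(0).expand(B, -1, -1)
        out = self.decoder(queries, video_tokens)   # cross-attend to video
        n_a = self.action_queries.size(0)
        # Pool each query group into one action and one view embedding.
        return out[:, :n_a].mean(1), out[:, n_a:].mean(1)

def supcon_loss(z, labels, tau=0.1):
    """SupCon-style supervised contrastive loss over a batch of embeddings."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                    # (B, B)
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))          # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos.sum(1)
    # Mean log-likelihood of positives, for anchors that have any positive.
    loss = -(log_prob.masked_fill(~pos, 0.0).sum(1) / n_pos.clamp(min=1))
    return loss[n_pos > 0].mean()

# Stage 2 (toy example): same-action embeddings are pulled together across
# views, and same-view embeddings across actions, to disentangle the two.
dec = DisentangledDecoder()
tokens = torch.randn(4, 16, 512)                  # 4 clips, 16 tokens each
a_emb, v_emb = dec(tokens)
loss = (supcon_loss(a_emb, torch.tensor([0, 0, 1, 2]))    # action labels
        + supcon_loss(v_emb, torch.tensor([0, 1, 0, 1]))) # camera/view labels
```

In this sketch, contrasting the action embeddings with action labels and the view embeddings with view labels is what encourages the two query groups to carry complementary, disentangled information; a classification head on the action embedding would supply the usual cross-entropy term.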

Results from the Paper


Task                          Dataset        Model              Metric                    Value  Global Rank
Action Recognition            NTU RGB+D      DVANet (RGB only)  Accuracy (Cross-Subject)  93.4   #13
Action Recognition            NTU RGB+D      DVANet (RGB only)  Accuracy (Cross-View)     98.1   #6
Action Recognition            NTU RGB+D 120  DVANet (RGB only)  Accuracy (Cross-Subject)  91.6   #6
Action Recognition            NTU RGB+D 120  DVANet (RGB only)  Accuracy (Cross-Setup)    90.4   #10
Action Recognition            N-UCLA         DVANet             Accuracy (Cross-Subject)  94.4   #1
Action Recognition            N-UCLA         DVANet             Accuracy (Cross-View)     96.5   #1
Action Recognition In Videos  PKU-MMD        DVANet (RGB only)  Accuracy (Cross-Subject)  95.8   #2
Action Recognition In Videos  PKU-MMD        DVANet (RGB only)  Accuracy (Cross-View)     95.2   #3
