Fusing Posture and Position Representations for Point Cloud-Based Hand Gesture Recognition

3DV 2022 · Alexander Bigalke, Mattias P. Heinrich

Hand gesture recognition can benefit from directly processing 3D point cloud sequences, which carry rich geometric information and enable the learning of expressive spatio-temporal features. However, currently employed single-stream models cannot sufficiently capture multi-scale features that include both fine-grained local posture variations and global hand movements. We therefore propose a novel dual-stream model, which decouples the learning of local and global features; these are eventually fused in an LSTM for temporal modelling. To induce the two streams to capture complementary position and posture features, we use different 3D learning architectures in each stream. Specifically, state-of-the-art point cloud networks excel at capturing fine posture variations from raw point clouds in the local stream. To track hand movements in the global stream, we combine an encoding with residual basis point sets and a fully-connected DenseNet. We evaluate the method on the SHREC'17 and DHG datasets and report state-of-the-art results at a reduced computational cost. Source code is available at https://github.com/multimodallearning/hand-gesture-posture-position.
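The following is a minimal PyTorch sketch of the dual-stream idea described in the abstract. It is not the authors' implementation (see the linked repository for that): the module names, layer sizes, and the PointNet-style local backbone are illustrative assumptions, a plain MLP stands in for the fully-connected DenseNet, and a plain basis point set encoding (nearest-point distances to fixed basis points) stands in for the paper's residual basis point sets.

```python
# Minimal sketch of a dual-stream posture/position model with LSTM fusion.
# All names and sizes are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class GlobalStream(nn.Module):
    """Position stream: encodes each frame against a fixed basis point set (BPS).

    Each of the B basis points stores its distance to the nearest cloud point;
    a simple MLP (standing in for the fully-connected DenseNet) maps this
    B-dimensional encoding to a feature vector.
    """
    def __init__(self, num_basis=64, feat_dim=128):
        super().__init__()
        # Fixed random basis points in [-1, 1]^3, not trained.
        self.register_buffer("basis", torch.rand(num_basis, 3) * 2 - 1)
        self.mlp = nn.Sequential(
            nn.Linear(num_basis, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, pts):  # pts: (T, N, 3) point cloud sequence
        # Squared distances between every basis point and every cloud point: (T, B, N)
        d = ((pts.unsqueeze(1) - self.basis.view(1, -1, 1, 3)) ** 2).sum(-1)
        enc = d.min(dim=-1).values.sqrt()  # nearest-point distance per basis point: (T, B)
        return self.mlp(enc)               # (T, feat_dim)


class LocalStream(nn.Module):
    """Posture stream: a shared point-wise MLP with max pooling (PointNet-style)
    stands in for the point cloud backbone used in the paper."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, pts):                # pts: (T, N, 3), centered per frame
        f = self.point_mlp(pts)            # (T, N, feat_dim)
        return f.max(dim=1).values         # symmetric pooling over points: (T, feat_dim)


class DualStreamGestureNet(nn.Module):
    """Concatenates per-frame posture and position features and fuses them in an LSTM."""
    def __init__(self, num_classes=14, feat_dim=128, hidden=256):
        super().__init__()
        self.local = LocalStream(feat_dim)
        self.global_ = GlobalStream(feat_dim=feat_dim)
        self.lstm = nn.LSTM(2 * feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, pts):  # pts: (T, N, 3)
        # Centering removes global position, so the local stream sees posture only;
        # the global stream sees raw coordinates and thus the hand's movement.
        local = self.local(pts - pts.mean(dim=1, keepdim=True))
        glob = self.global_(pts)
        fused = torch.cat([local, glob], dim=-1).unsqueeze(0)  # (1, T, 2*feat_dim)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])  # classify from the final time step


# Usage: a 32-frame sequence with 256 points per frame.
logits = DualStreamGestureNet(num_classes=14)(torch.rand(32, 256, 3))
```

Subtracting the per-frame centroid before the local stream is one simple way to realize the decoupling the abstract describes: the posture stream becomes invariant to where the hand is, while the position stream retains exactly that information.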


Datasets

SHREC 2017, DHG-14, DHG-28

Results from the Paper


Task | Dataset | Model | Metric | Value | Rank
Hand Gesture Recognition | DHG-14 | FPPR-PCD | Accuracy | 92.0 | #3
Hand Gesture Recognition | DHG-28 | FPPR-PCD | Accuracy | 91.7 | #2
Hand Gesture Recognition | SHREC 2017 | FPPR-PCD | 14 gestures accuracy | 96.1 | #1
Hand Gesture Recognition | SHREC 2017 | FPPR-PCD | 28 gestures accuracy | 95.2 | #1