CGAP2: Context and gap aware predictive pose framework for early detection of gestures

18 Nov 2020  ·  Nishant Bhattacharya, Suresh Sundaram ·

With growing interest in autonomous vehicle operation, there is an equally increasing need for efficient anticipatory gesture recognition systems for human-vehicle interaction. Existing gesture-recognition algorithms have been primarily restricted to historical data. In this paper, we propose a novel context and gap aware pose prediction framework (CGAP2), which predicts future pose data for anticipatory recognition of gestures in an online fashion. CGAP2 implements an encoder-decoder architecture paired with a pose prediction module to anticipate future frames, followed by a shallow classifier. CGAP2's pose prediction module uses 3D convolutional layers and is parameterized by the number of pose frames supplied, the time difference between consecutive pose frames, and the number of predicted pose frames. The performance of CGAP2 is evaluated on the Human3.6M dataset with the MPJPE metric. For pose prediction 15 frames in advance, an error of 79.0 mm is achieved. The pose prediction module has only 26M parameters and runs at 50 FPS on an Nvidia Titan RTX. Furthermore, the ablation study indicates that supplying higher context information to the pose prediction module can be detrimental to anticipatory recognition. CGAP2 has a 1-second time advantage over other gesture recognition systems, which can be crucial for autonomous vehicles.
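The MPJPE metric used above (Mean Per Joint Position Error) is the Euclidean distance between each predicted and ground-truth 3D joint, averaged over all joints and frames. A minimal sketch in plain Python; the joint layout and toy values below are illustrative, not taken from the paper:

```python
import math

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: average Euclidean distance (in the
    input units, e.g. mm) between predicted and ground-truth 3D joints.

    pred, gt: sequences of frames; each frame is a list of (x, y, z) joints.
    """
    total, count = 0.0, 0
    for pred_frame, gt_frame in zip(pred, gt):
        for (px, py, pz), (gx, gy, gz) in zip(pred_frame, gt_frame):
            total += math.sqrt((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2)
            count += 1
    return total / count

# Toy example: one frame, two joints, each displaced by 3 mm along one axis.
pred = [[(3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]]
gt = [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]]
print(mpjpe(pred, gt))  # → 3.0
```

The reported 79.0 mm error is this quantity computed over the Human3.6M evaluation set for poses predicted 15 frames ahead.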
