Graph and Temporal Convolutional Networks for 3D Multi-person Pose Estimation in Monocular Videos

22 Dec 2020 · Yu Cheng, Bo Wang, Bo Yang, Robby T. Tan

Despite recent progress, 3D multi-person pose estimation from monocular videos remains challenging due to missing information caused by occlusion, partially out-of-frame target persons, and inaccurate person detection. To tackle this problem, we propose a novel framework integrating graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to robustly estimate camera-centric multi-person 3D poses without requiring camera parameters. In particular, we introduce a human-joint GCN, which, unlike existing GCNs, is based on a directed graph that employs the 2D pose estimator's confidence scores to improve the pose estimation results. We also introduce a human-bone GCN, which models the bone connections and provides information beyond the human joints. The two GCNs work together to estimate the spatial frame-wise 3D poses, using both visible joint and bone information in the target frame to infer the occluded or missing human parts. To further refine the 3D pose estimation, we use TCNs to enforce temporal and human-dynamics constraints: a joint-TCN estimates person-centric 3D poses across frames, and a velocity-TCN estimates the speed of 3D joints to ensure the consistency of the 3D pose estimation in consecutive frames. Finally, to estimate the 3D human poses of multiple persons, we propose a root-TCN that estimates camera-centric 3D poses without requiring camera parameters. Quantitative and qualitative evaluations demonstrate the effectiveness of the proposed method.
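To make the confidence-weighted, directed human-joint GCN idea concrete, below is a minimal sketch (not the authors' implementation) of a graph convolution layer that scales incoming messages by each source joint's 2D-detector confidence. The class name, the fixed adjacency matrix, and the toy usage are all assumptions for illustration; the paper's actual graph construction and layer design may differ.

```python
import torch
import torch.nn as nn


class ConfidenceWeightedGCNLayer(nn.Module):
    """Directed graph convolution over 2D joints, with incoming messages
    scaled by each source joint's 2D-detector confidence (illustrative only)."""

    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        # adjacency: (J, J) 0/1 matrix; entry (i, j) = 1 means joint j sends
        # a message to joint i. The exact connectivity is an assumption here.
        self.register_buffer("adjacency", adjacency.float())
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, joints, confidences):
        # joints:      (B, J, in_dim)  e.g. 2D keypoint coordinates/features
        # confidences: (B, J)          2D pose estimator scores in [0, 1]
        # Scale every incoming edge by the source joint's confidence, so
        # occluded or out-of-frame joints contribute less to their neighbors.
        weights = self.adjacency.unsqueeze(0) * confidences.unsqueeze(1)  # (B, J, J)
        weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-6)    # row-normalize
        aggregated = torch.bmm(weights, joints)                           # (B, J, in_dim)
        return torch.relu(self.linear(aggregated))


# Toy usage with 17 joints and a placeholder self-loop graph (hypothetical):
if __name__ == "__main__":
    adj = torch.eye(17)                              # placeholder adjacency
    layer = ConfidenceWeightedGCNLayer(2, 64, adj)
    pose_2d = torch.randn(1, 17, 2)                  # one detected person
    conf = torch.rand(1, 17)                         # per-joint confidences
    print(layer(pose_2d, conf).shape)                # torch.Size([1, 17, 64])
```

The key design point the sketch tries to capture is that low-confidence joints (typically the occluded or missing ones) are down-weighted as message sources, so their neighbors' estimates rely more on well-detected joints and bones.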


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Human Pose Estimation | 3DPW | GnTCN | PA-MPJPE | 64.2 | #106 |
| Root Joint Localization | Human3.6M | GnTCN | MRPE | 88.1 | #1 |
| 3D Human Pose Estimation | Human3.6M | GnTCN | Average MPJPE (mm) | 40.9 | #74 |
| | | | Using 2D ground-truth joints | No | #2 |
| | | | Multi-View or Monocular | Monocular | #1 |
| | | | PA-MPJPE | 30.4 | #8 |
| 3D Absolute Human Pose Estimation | Human3.6M | GnTCN | MRPE | 88.1 | #1 |
| 3D Multi-Person Pose Estimation (absolute) | MuPoTS-3D | GnTCN | 3DPCK | 45.7 | #3 |
| 3D Multi-Person Pose Estimation (root-relative) | MuPoTS-3D | GnTCN | 3DPCK | 87.5 | #3 |
