Lifting 2d Human Pose to 3d: A Weakly-Supervised Approach

Sandika Biswas, Sanjana Sinha, Kavya Gupta and Brojeshwar Bhowmick
Embedded Systems and Robotics, TCS Research and Innovation
Email: {biswas.sandika, sanjana.sinha, gupta.kavya, b.bhowmick}@tcs.com

Abstract—Estimating 3d human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity in recovering depth from the single view. Recent deep learning based methods show promising results by using supervised learning on 3d pose annotated datasets. However, the lack of large-scale 3d annotated training data captured under in-the-wild settings makes 3d pose estimation difficult for in-the-wild poses. Few approaches have utilized training images from both 3d and 2d pose datasets in a weakly-supervised manner for learning 3d poses in unconstrained settings. In this paper, we propose a method which can effectively predict 3d human pose from 2d pose using a deep neural network trained in a weakly-supervised manner on a combination of ground-truth 3d pose and ground-truth 2d pose. Our method uses re-projection error minimization as a constraint to predict the 3d locations of body joints, and this is crucial for training on data where the 3d ground-truth is not present. Since minimizing re-projection error alone may not guarantee an accurate 3d pose, we also use additional geometric constraints on skeleton pose to regularize the pose in 3d. We demonstrate the superior generalization ability of our method by cross-dataset validation on a challenging 3d benchmark dataset MPI-INF-3DHP containing in-the-wild 3d poses.
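The abstract above describes two ingredients that make the weak supervision work: a re-projection term that ties the predicted 3d joints back to the known 2d pose, and geometric constraints that keep the lifted skeleton plausible when no 3d ground truth is available. The snippet below is a minimal, illustrative sketch of such a combined loss, not the authors' released code; the weak-perspective projection, the joint-index pairs, and the loss weights are assumptions made purely for illustration.

```python
# Illustrative sketch of a weakly-supervised lifting loss: 3d supervision when
# available, plus a 2d re-projection term and a left/right bone-length symmetry
# regularizer. Joint indices, projection model, and weights are assumptions.
import torch
import torch.nn.functional as F

# Hypothetical (left_bone, right_bone) pairs as (parent, child) joint indices
# in a 17-joint skeleton; the actual layout depends on the dataset used.
SYMMETRIC_BONES = [((11, 12), (14, 15)),   # upper arms
                   ((12, 13), (15, 16)),   # lower arms
                   ((4, 5), (1, 2)),       # upper legs
                   ((5, 6), (2, 3))]       # lower legs


def reprojection_loss(pred_3d, gt_2d, scale):
    """Weak-perspective re-projection: drop depth, scale x-y, compare to 2d GT."""
    proj_2d = pred_3d[..., :2] * scale.view(-1, 1, 1)
    return F.mse_loss(proj_2d, gt_2d)


def bone_length(joints_3d, bone):
    parent, child = bone
    return torch.norm(joints_3d[:, child] - joints_3d[:, parent], dim=-1)


def symmetry_loss(pred_3d):
    """Penalize differences between left and right limb lengths."""
    loss = 0.0
    for left, right in SYMMETRIC_BONES:
        loss = loss + F.l1_loss(bone_length(pred_3d, left),
                                bone_length(pred_3d, right))
    return loss / len(SYMMETRIC_BONES)


def weakly_supervised_loss(pred_3d, gt_2d, scale, gt_3d=None,
                           w_reproj=1.0, w_sym=0.1):
    """Use full 3d supervision when gt_3d is available; otherwise only the
    re-projection and geometric terms supervise the prediction."""
    loss = w_reproj * reprojection_loss(pred_3d, gt_2d, scale)
    loss = loss + w_sym * symmetry_loss(pred_3d)
    if gt_3d is not None:
        loss = loss + F.mse_loss(pred_3d, gt_3d)
    return loss


if __name__ == "__main__":
    B, J = 4, 17
    pred_3d = torch.randn(B, J, 3, requires_grad=True)
    gt_2d = torch.randn(B, J, 2)
    scale = torch.ones(B)
    loss = weakly_supervised_loss(pred_3d, gt_2d, scale)   # 2d-only batch
    loss.backward()
    print(float(loss))
```

In a 2d-only batch the 3d term simply drops out, so the network is still supervised through the re-projection and symmetry terms, which is the essence of the weakly-supervised setup described in the abstract.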
