›› 3D Pose Reconstruction
Recent research has shown successful application of part-based algorithms to 2D human pose estimation in images under challenging scenarios. However, little effort has been spent on extending them to 3D space, where more efficient local priors and anthropometric constraints can be imposed to recover more accurate 3D pose. We present a method for recovering 3D human pose from monocular images by fitting a 3D deformable part-based human model that best explains the part detections in the image. Such an approach is severely challenged by strong ambiguities arising from noisy part detection responses and joint motion along the depth direction. To overcome this, we use multi-scale representations of the parts as viewed at different depths. Further, we constrain the search to a learned latent space, restricting it to only the plausible set of 3D human poses. Finally, we propose a principled strategy that iteratively performs optimization as a constrained local search in 3D space to maximally align the learned part representations with the 2D projections of the recovered pose. We evaluate our framework on the HumanEva-I dataset and demonstrate the applicability of our approach.
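The core loop described above, fitting a latent-space pose model so that its 2D projection agrees with part detections, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear latent model, the perspective camera, the greedy coordinate search, and all names (`decode`, `project`, `constrained_local_search`, the dimensions, the synthetic detections) are assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
N_JOINTS = 15
LATENT_DIM = 4

# Assumed linear latent pose model (e.g. PCA over training poses):
# pose_3d = mean + basis @ z, which restricts the search to a
# low-dimensional set of plausible poses.
mean_pose = rng.normal(size=(N_JOINTS, 3))
basis = 0.1 * rng.normal(size=(N_JOINTS * 3, LATENT_DIM))

def decode(z):
    """Map a latent vector z to a 3D pose of shape (N_JOINTS, 3)."""
    return mean_pose + (basis @ z).reshape(N_JOINTS, 3)

def project(pose_3d, focal=1.0):
    """Simple perspective projection of 3D joints onto the image plane."""
    depth = pose_3d[:, 2] + 5.0  # offset to place the pose in front of the camera
    return focal * pose_3d[:, :2] / depth[:, None]

def alignment_cost(z, detections_2d):
    """Squared reprojection error between projected joints and 2D detections."""
    return np.sum((project(decode(z)) - detections_2d) ** 2)

def constrained_local_search(detections_2d, n_iters=200, step=0.05, z_max=3.0):
    """Greedy coordinate search in latent space, clipped to stay near the prior."""
    z = np.zeros(LATENT_DIM)
    cost = alignment_cost(z, detections_2d)
    for _ in range(n_iters):
        improved = False
        for d in range(LATENT_DIM):
            for delta in (step, -step):
                z_try = z.copy()
                z_try[d] = np.clip(z_try[d] + delta, -z_max, z_max)
                c_try = alignment_cost(z_try, detections_2d)
                if c_try < cost:
                    z, cost, improved = z_try, c_try, True
        if not improved:
            step *= 0.5  # refine the search scale once no move helps
            if step < 1e-4:
                break
    return z, cost

# Synthetic "detections": the projection of a pose from a known latent vector.
z_true = np.array([0.8, -0.5, 0.3, 0.1])
detections = project(decode(z_true))

z_hat, final_cost = constrained_local_search(detections)
print(final_cost)
```

In the real setting the cost would score learned part appearance responses rather than exact joint positions, and the latent model would come from training data; the sketch only shows how a constrained local search over a low-dimensional pose space can resolve the 2D-to-3D fitting problem.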