
3D Human Pose Estimation in RGBD Images for Robotic Task Learning

Abstract

We propose an approach to estimate 3D human pose in real-world units from a single RGBD image and show that it exceeds the performance of monocular 3D pose estimation approaches that use only color as well as of approaches that estimate pose exclusively from depth. Our approach builds on robust human keypoint detectors for color images and incorporates the depth channel to lift the detections into 3D. We combine the system with our learning-from-demonstration framework to instruct a service robot without the need for markers. Experiments in real-world settings demonstrate that our approach enables a PR2 robot to imitate manipulation actions observed from a human teacher.
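As a rough illustration of how depth can be used to lift 2D keypoint detections into metric 3D, the sketch below back-projects pixel keypoints through a pinhole camera model using a registered depth map. This is a minimal baseline for illustration, not the learned lifting described in the paper; the function name, the intrinsics parameters (fx, fy, cx, cy), and the per-pixel depth lookup are assumptions.

```python
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D keypoints into 3D camera coordinates (meters).

    keypoints_2d: (K, 2) array of (u, v) pixel coordinates.
    depth_map:    (H, W) depth in meters, registered to the color image.
    Returns:      (K, 3) array of (X, Y, Z); rows with invalid depth are NaN.
    """
    points_3d = np.full((len(keypoints_2d), 3), np.nan)
    for i, (u, v) in enumerate(np.round(keypoints_2d).astype(int)):
        # Skip keypoints that fall outside the depth image.
        if 0 <= v < depth_map.shape[0] and 0 <= u < depth_map.shape[1]:
            z = depth_map[v, u]
            if z > 0:  # ignore missing depth readings
                # Pinhole back-projection: pixel + depth -> metric 3D point.
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points_3d[i] = (x, y, z)
    return points_3d
```

In practice, a single depth pixel at a keypoint location is often missing or noisy, so a robust lookup (e.g., a local median over a small window) or a learned lifting stage is preferable to this naive per-pixel read.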




References

C. Zimmermann, T. Welschehold, C. Dornhege, W. Burgard, and T. Brox, “3D Human Pose Estimation in RGBD Images for Robotic Task Learning”, ICRA 2018. [BibTex]