A Novel Joint Points and Silhouette-Based Method to Estimate 3D Human Pose and Shape

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding › Research › peer-review

Abstract

This paper presents a novel method for estimating 3D human pose and shape from sparse-view images, using joint points and silhouettes together with a parametric body model. First, the parametric model is fitted to joint points estimated by a deep-learning-based human pose estimator. We then extract correspondences between the pose-fitted parametric model and the silhouettes in both 2D and 3D space, build a novel energy function from these correspondences, and minimize it to fit the parametric model to the silhouettes. Because the silhouette energy is built in both 2D and 3D space, our approach exploits comprehensive shape information; as a result, it requires only sparse-view images, balancing the amount of data used against the required prior information. Results on synthetic and real data demonstrate the competitive performance of our approach for human pose and shape estimation.
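The two-stage pipeline described above (fit a parametric model to detected joints, then refine by minimizing a silhouette-correspondence energy) can be illustrated with a minimal sketch. Everything here is hypothetical: a random linear "parametric model", an orthographic projection, and a fixed point-to-point pairing standing in for the paper's 2D/3D silhouette matching. It is not the authors' implementation, only a toy showing the shape of the optimization.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy "parametric model": a parameter vector theta maps
# linearly to 3D joint positions and 3D surface boundary points.
rng = np.random.default_rng(0)
J, B, P = 14, 50, 10                      # joints, boundary points, params
joint_basis = rng.normal(size=(J * 3, P))
surf_basis = rng.normal(size=(B * 3, P))

def model_joints(theta):
    return (joint_basis @ theta).reshape(J, 3)

def model_surface(theta):
    return (surf_basis @ theta).reshape(B, 3)

def project(x3d):
    # Orthographic projection onto the image plane (drop z).
    return x3d[:, :2]

# Synthetic "observations" generated from a ground-truth parameter vector.
theta_true = rng.normal(size=P)
obs_joints_2d = project(model_joints(theta_true))
obs_silhouette_2d = project(model_surface(theta_true))

def energy(theta, w_sil=1.0):
    # Stage 1 term: 2D joint reprojection error.
    e_joints = np.sum((project(model_joints(theta)) - obs_joints_2d) ** 2)
    # Stage 2 term: silhouette correspondence error (a fixed pairing
    # stands in for the paper's 2D/3D correspondence extraction).
    e_sil = np.sum((project(model_surface(theta)) - obs_silhouette_2d) ** 2)
    return e_joints + w_sil * e_sil

res = minimize(energy, np.zeros(P), method="L-BFGS-B")
print(res.fun)  # near zero: the fitted parameters reproduce the observations
```

In the real method, the silhouette term would be rebuilt from fresh model-to-silhouette correspondences at each iteration rather than from a fixed pairing, and the parametric model would be a nonlinear articulated body model rather than a linear basis.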
Original language: English
Title of host publication: Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021
Publisher: Springer
Pages: 41-56
ISBN (Print): 978-3-030-68762-5
DOIs
Publication status: Published - 2021

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 12661
ISSN (Electronic): 1611-3349

Subject classification (UKÄ)

  • Computer Science
  • Computer Vision and Robotics (Autonomous Systems)
