Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.

Catalin Ionescu, Dragos Papava, Vlad Olaru, Cristian Sminchisescu

Research output: Contribution to journal › Article › peer-review

1067 Citations (SciVal)


We introduce a new dataset, Human3.6M, of 3.6 million 3D human poses, acquired by recording the performance of 11 subjects from 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models. Besides increasing the size of current state-of-the-art datasets by several orders of magnitude, we aim to complement such datasets with a diverse set of poses encountered in typical human activities (taking photos, posing, greeting, eating, etc.), with synchronized image, motion capture, and depth data, and with accurate 3D body scans of all subjects involved. We also provide mixed-reality videos in which 3D human models are animated using motion capture data and inserted with correct 3D geometry into complex real environments, viewed with moving cameras and under occlusion. Finally, we provide large-scale statistical models and detailed evaluation baselines for the dataset, illustrating its diversity and the scope for improvement by future work in the research community. The dataset and code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, are available online.
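The large-scale learning baselines mentioned above rely on Fourier kernel approximations (see the keywords below). A minimal sketch of random Fourier features, which approximate an RBF kernel with an explicit finite-dimensional feature map so that kernel methods scale linearly in the number of samples; the input dimension, feature count, and bandwidth here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 10, 2000, 1.0  # input dim, feature count, RBF bandwidth (illustrative)

# Random projection for the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):
# W ~ N(0, sigma^-2 I), b ~ U[0, 2*pi)
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(X):
    """Map rows of X to D random Fourier features; z(x).z(y) ~= k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Compare the exact kernel value with its random-feature approximation.
x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = float(features(x[None]) @ features(y[None]).T)
```

With D random features the approximation error shrinks as O(1/sqrt(D)), which is why such maps make kernel regression feasible on millions of poses.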
Original language: English
Pages (from-to): 1325-1339
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 7
Publication status: Published - 2014

Bibliographical note

Published online 12 December 2013

Subject classification (UKÄ)

  • Mathematics


Keywords

  • 3D human pose estimation
  • human motion capture data
  • articulated body modeling
  • optimization
  • large scale learning
  • structured prediction
  • Fourier kernel approximations


