At SMPLCap, we focus on advancing the field of 3D Human Pose and Shape Estimation (HPS). Our goal is to accurately capture detailed full-body human motion—including hands and facial expressions—from various input sources, using parametric human body models such as SMPL and SMPL-X as our standard output representation.
We develop methods and tools to recover full-body pose and shape from diverse input modalities. Our research spans foundation models and benchmarks for expressive human pose and shape estimation (EHPS), robust estimation in perspective-distorted images, world-grounded applications, and one-stage pipelines with integrated localization.
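To make the output representation concrete: SMPL-X describes a full body, including hands and face, with a small set of low-dimensional parameter groups. The sketch below lists those groups and their standard dimensionalities (the variable name `SMPLX_PARAM_DIMS` is ours for illustration, not part of the official `smplx` package API):

```python
# Standard SMPL-X parameter groups and their dimensions.
# Dimensions follow the public SMPL-X model; the dict itself is an
# illustrative summary, not an official API.
SMPLX_PARAM_DIMS = {
    "betas": 10,             # body shape coefficients
    "global_orient": 3,      # root rotation (axis-angle)
    "body_pose": 21 * 3,     # 21 body joints (axis-angle)
    "left_hand_pose": 15 * 3,
    "right_hand_pose": 15 * 3,
    "jaw_pose": 3,
    "leye_pose": 3,
    "reye_pose": 3,
    "expression": 10,        # facial expression coefficients
}

total = sum(SMPLX_PARAM_DIMS.values())
print(total)  # 185
```

Regressing this compact vector, rather than dense per-vertex geometry, is what makes SMPL-X a practical common output format across the projects below.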
- [2025-04-11] Projects and homepage updated
- [2025-04-10] 🚀🚀🚀 Announcing the launch of SMPLCap 🚀🚀🚀
- [SMPL-X] [arXiv'25] SMPLest-X: An extended version of SMPLer-X with stronger foundation models.
- [SMPL-X] [NeurIPS'23] SMPLer-X: Scaling up EHPS towards a family of generalist foundation models.
- [SMPL] [NeurIPS'22] HMR-Benchmarks: A comprehensive benchmark of HPS datasets, backbones, and training strategies.
- [SMPL-X] [ECCV'24] WHAC: World-grounded human pose and camera estimation from monocular videos.
- [SMPL-X] [CVPR'24] AiOS: An all-in-one-stage pipeline combining detection and 3D human reconstruction.
- [SMPL-X] [NeurIPS'23] RoboSMPLX: A framework to enhance the robustness of whole-body pose and shape estimation.
- [SMPL] [ICCV'23] Zolly: 3D human mesh reconstruction from perspective-distorted images.
- [SMPL] [arXiv'23] PointHPS: 3D HPS from point clouds captured in real-world settings.
For inquiries about our research, collaborations, or opportunities, please reach out to us.