AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing (ECCV 2022, Official Code)
Jiaxi Jiang¹, Paul Streli¹, Huajian Qiu¹, Andreas Fender¹, Larissa Laich², Patrick Snape², Christian Holz¹

¹ Sensing, Interaction & Perception Lab, Department of Computer Science, ETH Zürich, Switzerland
² Reality Labs at Meta, Zurich, Switzerland
- Please download the datasets `BMLrub`, `CMU`, and `HDM05` from AMASS.
- Download the required body model and place it in the `support_data/body_models` directory of this repository. For the SMPL+H body model, download it from http://mano.is.tue.mpg.de/. Please download the AMASS version of the model with DMPL blendshapes. You can obtain the dynamic shape blendshapes (DMPLs) from http://smpl.is.tue.mpg.de. A sketch of loading an AMASS sequence with this body model is shown after this list.
- (Optional) If you want a new random data split, run `generate_split.py`.
- Run `prepare_data.py` to preprocess the input data for faster training. The data split for training and testing used in our paper is stored under the folder `data_split`.
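For orientation, here is a minimal sketch of posing an AMASS sequence with the SMPL+H body model. It assumes the `human_body_prior` package distributed with AMASS; the file paths are placeholders, and constructor argument names may differ slightly between package versions.

```python
import numpy as np
import torch
from human_body_prior.body_model.body_model import BodyModel

# Placeholder paths; point these at your AMASS download and body model.
bdata = np.load('path/to/amass_sequence.npz')
bm = BodyModel(
    bm_fname='support_data/body_models/smplh/neutral/model.npz',
    num_betas=16,
)

poses = torch.tensor(bdata['poses'], dtype=torch.float32)
body = bm(
    root_orient=poses[:, :3],    # global orientation (axis-angle)
    pose_body=poses[:, 3:66],    # 21 body joints (axis-angle)
)
print(body.Jtr.shape)            # per-frame joint positions, (T, 52, 3)
```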
For training, please run:

```
python main_train_avatarposer.py -opt options/train_avatarposer.json
```
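Training learns to map the sparse 6-DoF tracking signals of the headset and the two hand controllers to full-body poses. The following is only a conceptual sketch of that input/output mapping, not the authors' exact architecture; the 54-dimensional input and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseToFullBody(nn.Module):
    """Toy model: window of sparse head/hand signals -> full-body joint rotations."""
    def __init__(self, in_dim=54, d_model=256, n_joints=22):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(d_model, n_joints * 6)  # 6D rotation per joint

    def forward(self, x):                 # x: (batch, frames, in_dim)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])        # pose for the most recent frame

model = SparseToFullBody()
window = torch.randn(2, 40, 54)           # 2 sequences, 40 frames each
print(model(window).shape)                # torch.Size([2, 132])
```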
For testing, please run:

```
python main_test_avatarposer.py
```
Click Pretrained Models to download our pretrained model for AvatarPoser, and put it into `model_zoo`.
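As a quick sanity check that the download landed in the right place, you can inspect the checkpoint with plain PyTorch. The filename below is a placeholder for whichever file the link provides.

```python
import torch

# Placeholder filename; use the checkpoint you placed in model_zoo/.
state = torch.load('model_zoo/avatarposer.pth', map_location='cpu')
print(f'{len(state)} entries, e.g. {list(state)[:5]}')
```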
If you find our paper or code useful, please cite our work:
```bibtex
@inproceedings{jiang2022avatarposer,
  title={AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing},
  author={Jiang, Jiaxi and Streli, Paul and Qiu, Huajian and Fender, Andreas and Laich, Larissa and Snape, Patrick and Holz, Christian},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2022},
  organization={Springer}
}
```
This project is released under the MIT license. Our network training code builds on the frameworks of FBCNN and KAIR.