This is the official PyTorch implementation of the paper:

ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos (ACM MM 2024)

Preparation

  1. Install dependencies. This project was developed on Ubuntu 18.04 with NVIDIA RTX 3090 GPUs. We recommend using an Anaconda virtual environment.
# Create a conda environment.
conda create -n arts python=3.8
conda activate arts

# Install PyTorch >= 1.2; the command below pins 1.10.0 with CUDA 11.3 (match cudatoolkit to your GPU driver).
conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge

# Install other dependencies.
sh requirements.sh
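
# Optional sanity check (our addition, not part of the original instructions):
# this should print "1.10.0 True" if PyTorch and CUDA were installed correctly.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"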
  2. Prepare the SMPL layer.
  • For the SMPL layer, we use smplpytorch; the repo is already included in the ./smplpytorch folder. A minimal usage sketch follows below.
  • Download basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl, and basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from SMPL (female & male) and SMPL (neutral) to ./smplpytorch/smplpytorch/native/models.
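
As a quick check that the model files are in place, the bundled layer can be constructed and run directly. This is a minimal sketch based on the public smplpytorch API, our illustration rather than ARTS project code:

import torch
from smplpytorch.pytorch.smpl_layer import SMPL_Layer

# Build the neutral SMPL layer from the downloaded .pkl models.
smpl_layer = SMPL_Layer(
    center_idx=0,
    gender='neutral',  # 'female'/'male' use the other two .pkl files
    model_root='smplpytorch/smplpytorch/native/models')

# Zero axis-angle pose (72-D) and shape coefficients (10-D) give the mean body.
pose_params = torch.zeros(1, 72)
shape_params = torch.zeros(1, 10)
verts, joints = smpl_layer(pose_params, th_betas=shape_params)
print(verts.shape, joints.shape)  # torch.Size([1, 6890, 3]) torch.Size([1, 24, 3])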

Implementation

Data Preparation

Download all the processed annotation files from OneDrive, then rename ./data_final to ./data. The ./data directory should follow the hierarchy below.

${Project}  
|-- data  
|   |-- base_data
|   |   |-- J_regressor_extra.npy
|   |   |-- mesh_downsampling.npz
|   |   |-- smpl_mean_params.npz
|   |   |-- smpl_mean_vertices.npy
|   |   |-- SMPL_NEUTRAL.pkl
|   |   |-- spin_model_checkpoint.pth.tar
|   |-- COCO  
|   |   |-- coco_data  
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |   |-- J_regressor_coco.npy
|   |-- Human36M  
|   |   |-- h36m_data  
|   |   |-- __init__.py
|   |   |-- dataset.py 
|   |   |-- J_regressor_h36m_correct.npy
|   |   |-- noise_stats.py
|   |-- MPII  
|   |   |-- mpii_data  
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |-- MPII3D
|   |   |-- mpii3d_data  
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |-- PW3D 
|   |   |-- pw3d_data
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |-- multiple_datasets.py
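
Before training, it can help to verify that the annotation files landed where the loaders expect them. A small sanity check (our addition; paths taken from the tree above):

import os

# Spot-check a few files from the hierarchy above; extend as needed.
required = [
    'data/base_data/J_regressor_extra.npy',
    'data/base_data/mesh_downsampling.npz',
    'data/base_data/smpl_mean_params.npz',
    'data/base_data/SMPL_NEUTRAL.pkl',
    'data/base_data/spin_model_checkpoint.pth.tar',
    'data/COCO/J_regressor_coco.npy',
    'data/Human36M/J_regressor_h36m_correct.npy',
    'data/multiple_datasets.py',
]
missing = [p for p in required if not os.path.exists(p)]
print('data/ layout looks complete' if not missing else 'Missing: %s' % missing)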

Quick Demo

  1. Install ViTPose. ARTS uses an off-the-shelf 2D pose detector to detect persons in images; here we install ViTPose.
git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
git checkout v1.3.9
MMCV_WITH_OPS=1 pip install -e .
# Or: pip install mmcv-full==1.5.0 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
cd ..
git clone https://github.com/ViTAE-Transformer/ViTPose.git
cd ViTPose
pip install -v -e .
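# Optional sanity check (our addition): confirm the mmcv build imports cleanly.
python -c "import mmcv; print(mmcv.__version__)"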
  2. Download the pre-trained ViTPose model vitpose-h-multi-coco.pth from OneDrive and put it under the ./pose_detector folder.
  3. Download the pre-trained ARTS models ARTS_Demo_Model and ARTS_PoseEstimation from GoogleDrive and put them under the ./experiment folder.
  4. Prepare the input video *.mp4 and put it under the ./demo folder.
  5. Run the demo. The output is written to the ./output folder.
# Change 'sample_video' to your video name.
sh ./main/demo.sh 
# or
python ./main/run_demo.py --vid_file demo/sample_video.mp4 --gpu 0

Train

Stage 1: Train the 3D pose estimation network.

# Human3.6M
bash train_pose_h36m.sh

# 3DPW
bash train_pose_3dpw.sh

Stage 2: Train the whole network for the final mesh. Configs for the experiments can be found and edited in the ./config folder. Change posenet_path in ./config/train_mesh_*.yml to the path of the pre-trained pose model from Stage 1; an illustrative excerpt follows below.
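
For reference, the relevant line in a Stage 2 config might look like this (the checkpoint filename is a hypothetical example; only the posenet_path key is prescribed above):

# ./config/train_mesh_3dpw.yml (excerpt; the path below is a hypothetical example)
posenet_path: './experiment/pose_3dpw/best_pose_model.pth.tar'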

# Human3.6M
bash train_mesh_h36m.sh

# 3DPW & MPII3D
bash train_mesh_3dpw.sh

Test

To test a pre-trained pose estimation model (Stage 1), download the pre-trained ARTS models ARTS_Demo_Model (3DPW dataset) and ARTS_PoseEstimation (3DPW dataset) from GoogleDrive and put them under the ./experiment folder.

# Human3.6M
bash test_pose_h36m.sh

# 3DPW
bash test_pose_3dpw.sh

To test a pre-trained mesh model (Stage 2):

# Human3.6M
bash test_mesh_h36m.sh

# 3DPW
bash test_mesh_3dpw.sh

# MPII3D
bash test_mesh_mpii3d.sh

Change weight_path in the corresponding ./config/test_*.yml to your model path; an illustrative excerpt follows below.
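
For example (the filename is a hypothetical placeholder; only the weight_path key is prescribed above):

# ./config/test_mesh_3dpw.yml (excerpt; the path below is a hypothetical example)
weight_path: './experiment/arts_mesh_3dpw/final_model.pth.tar'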

Citation

Please cite the following if you find this repository helpful to your project:

@inproceedings{tang2024arts,
  title={ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos},
  author={Tang, Tao and Liu, Hong and You, Yingxuan and Wang, Ti and Li, Wenhao},
  booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
  pages={1514--1523},
  year={2024}
}

Acknowledgement

This repo is extended from the excellent works PMCE, Pose2Mesh, and TCMR. We thank the authors for releasing their code.
