
PYSKL


PYSKL is a toolbox focusing on action recognition based on SKeLeton data with PYTorch. Various algorithms for skeleton-based action recognition are supported. This project is built on the open-source project MMAction2.

This repo is the official implementation of PoseConv3D and STGCN++.


Skeleton-based Action Recognition Results on NTU-RGB+D 120

News

  • Provide scripts to estimate the inference speed of each model (2022-12-30).
  • Support RGBPoseConv3D, a two-stream 3D-CNN for action recognition based on RGB & Human Skeleton. Follow the guide to train and test RGBPoseConv3D on NTURGB+D (2022-12-29).
  • We provide a script (ntu_preproc.py) to generate PYSKL-style annotation files from the official NTURGB+D skeleton files (2022-12-20).
  • Support DG-STGCN, a state-of-the-art skeleton-based action recognition algorithm that does not rely on a pre-defined graph (2022-12-12).
  • The tech report of PYSKL is accepted by MM 2022 (2022-06-28).

Supported Algorithms

Supported Skeleton Datasets

Installation

git clone https://github.com/kennymckormick/pyskl.git
cd pyskl
# Please first install PyTorch following the instructions on the official website: https://pytorch.org/get-started/locally/. Use a PyTorch version >= 1.5.0 and < 1.11.0.
# The following command installs mmcv-full 1.5.0 from source, which might be slow (~10 minutes). You can also follow the instructions at https://github.com/open-mmlab/mmcv to install mmcv-full from pre-built wheels, which is much faster.
pip install -r requirements.txt
pip install -e .
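
If the installation succeeded, the core packages should be importable. A minimal post-install sanity check (a sketch, not part of the official instructions):

# Quick check that PyTorch, mmcv-full, and pyskl were installed into the same environment.
import torch
import mmcv
import pyskl  # importable after `pip install -e .`

print('torch:', torch.__version__)           # expected: >= 1.5.0 and < 1.11.0
print('mmcv-full:', mmcv.__version__)        # 1.5.0 when installed via requirements.txt
print('CUDA available:', torch.cuda.is_available())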

Demo

# Before running the demo, make sure you have installed mmcv-full, mmpose, and mmdet. Install mmcv-full first, then mmpose and mmdet.
# Run the following scripts from the `$PYSKL` directory.
# Run the demo with PoseC3D trained on NTURGB+D 120 (joint modality), which is the default option. The input file is demo/ntu_sample.avi; the output file is demo/demo.mp4.
python demo/demo_skeleton.py demo/ntu_sample.avi demo/demo.mp4
# Run the demo with STGCN++ trained on NTURGB+D 120 (joint modality). The input file is demo/ntu_sample.avi; the output file is demo/demo.mp4.
python demo/demo_skeleton.py demo/ntu_sample.avi demo/demo.mp4 --config configs/stgcn++/stgcn++_ntu120_xsub_hrnet/j.py --checkpoint http://download.openmmlab.com/mmaction/pyskl/ckpt/stgcnpp/stgcnpp_ntu120_xsub_hrnet/j.pth

Note that to run the demo on an arbitrary input video, you need a tracker to group the per-frame pose estimation results into skeleton sequences. Currently we use a naive tracker based on inter-frame pose similarity. You can also write your own tracker.
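
For illustration, such a naive tracker can be sketched as a greedy matcher on the mean keypoint distance between consecutive frames. The function name and the distance threshold below are hypothetical; this is not the tracker shipped with demo_skeleton.py:

# A greedy pose-tracking sketch: each detected pose is appended to the closest track
# from the previous frame, or starts a new skeleton sequence if no track is close enough.
import numpy as np

def track_poses(frames, max_dist=50.0):
    """frames: list of (num_person, num_keypoint, 2) arrays of 2D keypoints, one per frame.
    Returns a list of tracks; each track is a list of (frame_idx, pose) pairs."""
    tracks = []
    for t, poses in enumerate(frames):
        for pose in poses:
            best, best_dist = None, max_dist
            for track in tracks:
                last_t, last_pose = track[-1]
                if last_t != t - 1:
                    continue  # only extend tracks that were active in the previous frame
                dist = np.linalg.norm(pose - last_pose, axis=-1).mean()
                if dist < best_dist:
                    best, best_dist = track, dist
            if best is None:
                tracks.append([(t, pose)])   # start a new skeleton sequence
            else:
                best.append((t, pose))       # greedy assignment, no one-to-one constraint
    return tracks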

Data Preparation

We provide HRNet 2D skeletons for every dataset we support, and Kinect 3D skeletons for the NTURGB+D and NTURGB+D 120 datasets. To obtain the human skeleton annotations, you can:

  1. Use our pre-processed skeleton annotations: we provide the processed skeleton data for all datasets as pickle files, which can be used directly for training and testing. Check the Data Doc for the download links and a description of the annotation format (a loading example is shown at the end of this section).

  2. For NTURGB+D 3D skeletons, you can download the official annotations from https://github.com/shahroudy/NTURGB-D and use our provided script to generate the processed pickle files. The generated files are the same as the provided ntu60_3danno.pkl and ntu120_3danno.pkl. For detailed instructions, follow the Data Doc.

  3. We also provide scripts to extract 2D HRNet skeletons from RGB videos; you can follow the diving48_example to extract 2D skeletons from an arbitrary RGB video dataset.

You can use vis_skeleton to visualize the provided skeleton data.
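
For reference, a downloaded annotation pickle can be inspected directly in Python. The snippet below is a sketch: the field names ('split', 'annotations', 'keypoint', ...) follow the format described in the Data Doc and may vary slightly across datasets.

# Load a PYSKL annotation pickle and look at one sample.
import pickle

with open('ntu60_3danno.pkl', 'rb') as f:
    data = pickle.load(f)

print(data.keys())                      # typically dict_keys(['split', 'annotations'])
sample = data['annotations'][0]
print(sample['frame_dir'], sample['label'], sample['total_frames'])
print(sample['keypoint'].shape)         # (num_person, num_frame, num_keypoint, 2 or 3)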

Training & Testing

You can use the following commands for training and testing. We support distributed training on a single server with multiple GPUs.

# Training
bash tools/dist_train.sh {config_name} {num_gpus} {other_options}
# Testing
bash tools/dist_test.sh {config_name} {checkpoint} {num_gpus} --out {output_file} --eval top_k_accuracy mean_class_accuracy
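
After testing, the dumped {output_file} can be inspected offline. The snippet below is a sketch that assumes the file is a pickle holding one class-score vector per test sample; adjust the path and loading code to your actual output.

# Load dumped test scores and compute the predicted class per sample.
import pickle
import numpy as np

with open('results.pkl', 'rb') as f:     # 'results.pkl' is whatever you passed to --out
    scores = pickle.load(f)

scores = np.stack(scores)                # (num_samples, num_classes)
pred = scores.argmax(axis=1)
print(scores.shape, pred[:10])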

For specific examples, please refer to the README of each algorithm we support.

Citation

If you use PYSKL in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry and the BibTeX entry corresponding to the specific algorithm you used.

@misc{duan2022PYSKL,
  url = {https://arxiv.org/abs/2205.09443},
  author = {Duan, Haodong and Wang, Jiaqi and Chen, Kai and Lin, Dahua},
  title = {PYSKL: Towards Good Practices for Skeleton Action Recognition},
  publisher = {arXiv},
  year = {2022}
}

Contributing

PYSKL is an open-source project under the Apache 2.0 license. Any contribution from the community to improve PYSKL is appreciated. For significant contributions (like supporting a novel and important task), a corresponding section will be added to our updated tech report, and the contributor will be added to the author list.

Any user can open a PR to contribute to PYSKL. The PR will be reviewed before being merged into the master branch. If you want to open a large PR, we recommend that you first reach out to me (via email: [email protected]) to discuss the design, which helps save a large amount of time in the review stage.

Contact

For any questions, feel free to contact: [email protected]
