
Commit 7c2fe44: Update README.md
kennymckormick authored May 3, 2022
1 parent 4ddb7ac
Showing 1 changed file with 4 additions and 3 deletions.

README.md (7 changes: 4 additions & 3 deletions)
@@ -14,8 +14,9 @@ This repo is the official implementation of [PoseConv3D](https://arxiv.org/abs/2

## News

-- We release the skeleton annotations (generated by HRNet), config files, and pre-trained checkpoints for Kinetics-400. Note that Kinetics-400 is a large-scale dataset (even for skeleton) and you should have `memcached` and `pymemcache` installed for efficient training and testing on Kinetics-400. <**2022-05-01**>
-- We provide an example for processing a custom video dataset (we use diving48), generating 2D skeleton annotations, and using PoseC3D for skeleton-based action recognition. The tutorial for skeleton extraction part is available in [diving48_example](/examples/extract_diving48_skeleton/diving48_example.ipynb). <**2022-04-15**>
+- Support the skeleton action recognition demo with GCN algorithms (**2022-05-03**).
+- Release the skeleton annotations (generated by HRNet), config files, and pre-trained checkpoints for Kinetics-400. Note that Kinetics-400 is a large-scale dataset (even for skeletons), so you should have `memcached` and `pymemcache` installed for efficient training and testing on it; see the caching sketch after this hunk (**2022-05-01**).
+- Provide an example of processing a custom video dataset (we use Diving48), generating 2D skeleton annotations, and using PoseC3D for skeleton-based action recognition. The tutorial for the skeleton extraction part is available at [diving48_example](/examples/extract_diving48_skeleton/diving48_example.ipynb) (**2022-04-15**).

## Supported Algorithms

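The Kinetics-400 news item above mentions serving annotations through `memcached` with `pymemcache`. pyskl's actual integration lives in its dataset code; the snippet below is only a minimal sketch of the idea, and the pickle path (`data/k400_hrnet.pkl`), the per-clip key scheme, and the annotation layout are assumptions, not taken from the repo:

```python
# A minimal sketch, not pyskl's loader: warm a local memcached instance
# with per-clip skeleton annotations so that training workers can fetch
# clips by name instead of each deserializing the full annotation file.
import pickle

from pymemcache.client.base import Client

client = Client(('127.0.0.1', 11211))  # assumes memcached runs locally

def warm_cache(path='data/k400_hrnet.pkl'):  # hypothetical path
    with open(path, 'rb') as f:
        annotations = pickle.load(f)  # assumed: a list of per-clip dicts
    for ann in annotations:
        # Key each clip by its identifier; 'frame_dir' is assumed to be
        # the field that names a clip in the annotation dicts.
        client.set(ann['frame_dir'], pickle.dumps(ann))

def get_annotation(name):
    """Fetch one clip's annotation from the cache; None on a miss."""
    cached = client.get(name)
    return pickle.loads(cached) if cached is not None else None
```

With the cache warmed once, every dataloader worker pays a single round-trip per clip rather than loading the whole pickle into its own memory.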
@@ -54,7 +55,7 @@ python demo/demo_skeleton.py demo/ntu_sample.avi demo/demo.mp4
python demo/demo_skeleton.py demo/ntu_sample.avi demo/demo.mp4 --config configs/stgcn++/stgcn++_ntu120_xsub_hrnet/j.py --ckpt http://download.openmmlab.com/mmaction/pyskl/ckpt/stgcnpp/stgcnpp_ntu120_xsub_hrnet/j.pth
```

-Note that for running demo on an arbitrary input video, you need a tracker to formulate pose estimation results for each frame into multiple skeleton sequences. Currently we are using a [naive tracker]() based on inter-frame pose similarities. You can also try to write your own tracker.
+Note that to run the demo on an arbitrary input video, you need a tracker that groups the per-frame pose estimation results into multiple skeleton sequences. Currently we use a [naive tracker](https://github.com/kennymckormick/pyskl/blob/4ddb7ac384e231694fd2b4b7774144e5762862ab/demo/demo_skeleton.py#L192) based on inter-frame pose similarities; you can also write your own tracker (a toy version is sketched after this hunk).

## Training & Testing

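The naive tracker referenced in the change above is the repo's own (`demo/demo_skeleton.py`). As a rough illustration only, a greedy tracker over inter-frame keypoint distances, with assumed data shapes and a made-up `max_dist` threshold, could look like:

```python
# A toy sketch of inter-frame pose-similarity tracking, not the repo's
# implementation: greedily extend each live track with the closest pose
# in the next frame, and start a new track for every unmatched pose.
import numpy as np

def pose_distance(p, q):
    """Mean L2 distance between two (num_keypoints, 2) pose arrays."""
    return float(np.linalg.norm(p - q, axis=1).mean())

def track_poses(frames, max_dist=50.0):
    """frames: list (one entry per frame) of lists of (K, 2) arrays.

    Returns a list of tracks, each a list of (frame_idx, pose) pairs,
    i.e. one skeleton sequence per tracked person.
    """
    tracks = []
    for t, poses in enumerate(frames):
        unmatched = list(poses)
        for track in tracks:
            last_t, last_pose = track[-1]
            # Only extend tracks that were alive in the previous frame.
            if last_t != t - 1 or not unmatched:
                continue
            dists = [pose_distance(last_pose, p) for p in unmatched]
            best = int(np.argmin(dists))
            if dists[best] < max_dist:
                track.append((t, unmatched.pop(best)))
        # Every pose left unmatched starts a new skeleton sequence.
        tracks.extend([(t, p)] for p in unmatched)
    return tracks
```

The threshold and the pixel-space distance are deliberately simplistic; a real tracker would at least normalize by person scale and handle short occlusions.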
