Attention: This project is in an early development stage, so the code and performance are still at a very primitive level.
NOTE: This is not an official implementation. The original paper is DeepPose: Human Pose Estimation via Deep Neural Networks. SECOND NOTE: This implementation started as a project for my Pattern Recognition course at METU. The code is still at a very primitive level, but people might find it useful.
- TensorFlow (Google's Neural Network Toolbox)
- Python 3.5.x
- NumPy
- SciPy (scipy.io is used for loading .mat files)
- PIL or Pillow
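If you are missing any of the Python packages, one way to install them is via pip (TensorFlow may need the platform-specific instructions from its own installation guide):

pip3 install numpy scipy pillow tensorflow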
Edit the values in 'LSPGlobals.py' to match your setup. All of the scripts read their configuration from that file.
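The exact variable names in 'LSPGlobals.py' may differ from this; the sketch below only illustrates the kind of values the scripts read from it (paths, input size, training schedule), and every name and number here is hypothetical:

    # Hypothetical example -- the real names and defaults live in LSPGlobals.py.
    import os

    FLAGS_data_dir = os.path.expanduser('~/Desktop/LSP_data')   # where the dataset binaries are written
    FLAGS_train_dir = os.path.join(FLAGS_data_dir, 'TrainData')  # checkpoints and TensorBoard event logs
    FLAGS_input_size = 220                                       # side length of the network input, in pixels
    FLAGS_batch_size = 128                                       # training batch size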
Dataset:

python3 GetLSPData.py
This script downloads the Leeds Sports Pose dataset (http://sam.johnson.io/research/lsp.html) and resizes the images to match your network's input size. The resized images and their labels are saved into binary files.
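For reference, the core of that preprocessing amounts to something like the sketch below (the file names and the joints.mat layout follow the public LSP release, the input size is a placeholder, and the real script's details may differ):

    import numpy as np
    import scipy.io
    from PIL import Image

    INPUT_SIZE = 220  # hypothetical network input size; the real value comes from LSPGlobals.py

    # joints.mat in the LSP release holds a 3 x 14 x 2000 array: (x, y, visibility) for 14 joints.
    joints = scipy.io.loadmat('lsp_dataset/joints.mat')['joints']

    def load_resized(index):
        """Return one resized image and its joint coordinates scaled to match."""
        img = Image.open('lsp_dataset/images/im%04d.jpg' % (index + 1))
        w, h = img.size
        img = img.resize((INPUT_SIZE, INPUT_SIZE))
        # Scale the joint coordinates by the same factors as the image.
        xy = joints[:2, :, index].T * [INPUT_SIZE / float(w), INPUT_SIZE / float(h)]
        return np.asarray(img), xy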
Training:
Just run:
python3 TrainLSP.py
You can monitor training with TensorBoard:

tensorboard --logdir=/path/to/log-directory  # the log path is '~/Desktop/LSP_data/TrainData' if LSPGlobals.py is unchanged
Evaluation:

python3 EvalDeepPose.py
This reads all images in '--input_dir' with the extension '--input_type', draws stick figures on them based on the model's estimations, and writes the annotated images to '--output_dir'.
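For example (the directories below are placeholders; check the script's argument parsing for the exact flag names and defaults):

python3 EvalDeepPose.py --input_dir ~/my_images --input_type jpg --output_dir ~/my_images_out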
For videos, I recommend using ffmpeg to split the video into frames, feeding those frames to the network, and then reassembling the annotated frames into a video with ffmpeg, as sketched below.
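Something along these lines works; the file names, frame rate, and directories are placeholders, the frames/ and frames_out/ directories must exist beforehand, and the last command assumes the drawn images keep sequential numeric file names:

ffmpeg -i input_video.mp4 frames/%05d.jpg
python3 EvalDeepPose.py --input_dir frames --input_type jpg --output_dir frames_out
ffmpeg -framerate 25 -i frames_out/%05d.jpg -pix_fmt yuv420p output_video.mp4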