This repo adds Z output to the TensorFlow version of 'Openpose' human pose estimation, using Convolutional 3D Pose Estimation from a Single Image. The code for the 3D pose estimation was taken from Lifting from the Deep by Denis Tome', Chris Russell, and Lourdes Agapito.
Original repo of Lifting from the Deep:
For details, see the README in the lifting code.
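For orientation, below is a minimal sketch of how the lifting stage is typically driven. The class name and checkpoint paths follow the upstream Lifting-from-the-Deep demo and should be treated as assumptions:

import cv2
from lifting import PoseEstimator  # from the Lifting-from-the-Deep code

# Checkpoint paths are assumptions based on the upstream demo layout.
SESSION_PATH = 'data/saved_sessions/init_session/init'
PROB_MODEL_PATH = 'data/saved_sessions/prob_model/prob_model_params.mat'

image = cv2.cvtColor(cv2.imread('./images/p1.jpg'), cv2.COLOR_BGR2RGB)
estimator = PoseEstimator(image.shape, SESSION_PATH, PROB_MODEL_PATH)
estimator.initialise()
pose_2d, visibility, pose_3d = estimator.estimate(image)  # pose_3d carries the Z output
estimator.close()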
It is recommended to perform the installation in a Python 3 virtual environment. To do so, run:
python3 -m venv tf-pose-venv
. tf-pose-venv/bin/activate
To use tf-pose-estimation-3d on macOS, you will need to install swig. To do so, run:
brew install swig
No installation is necessary to use Lifting from the Deep except downloading the models. To do so, follow the installation instructions for tf-pose-estimation below. When finished, run setup.sh:
bash setup.sh
Unfortunately, Docker is not yet supported for this feature.
'Openpose', a human pose estimation algorithm, has been implemented using TensorFlow. This repo also provides several variants with modified network structures for real-time processing on the CPU or low-power embedded devices.
You can even run this on your MacBook with decent FPS!
Original repo (Caffe): https://github.com/CMU-Perceptual-Computing-Lab/openpose
Implemented features are listed here: features
- 2019.3.12 Added new models using the mobilenet-v2 architecture. See: experiments.md
- 2018.5.21 The post-processing part is implemented in C++; compiling it is required. See: https://github.com/ildoonet/tf-pose-estimation/tree/master/src/pafprocess
- 2018.2.7 Arguments in the run.py script changed. Dynamic input size is now supported.
You need the dependencies below; a quick import check follows the list.
- python3
- tensorflow 1.4.1+
- opencv3, protobuf, python3-tk
- slidingwindow
- https://github.com/adamrehn/slidingwindow
- Copied from the above git repo, with a few modifications.
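A quick sanity check that the dependencies are importable (a sketch; the version expectations mirror the list above):

# Verify the core dependencies; run inside the virtual environment.
import tensorflow as tf
import cv2
import slidingwindow  # the modified copy mentioned above

print('tensorflow', tf.__version__)  # expect 1.4.1+
print('opencv', cv2.__version__)     # expect 3.x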
Clone the repo and install 3rd-party libraries.
$ git clone https://www.github.com/ildoonet/tf-pose-estimation
$ cd tf-pose-estimation
$ pip3 install -r requirements.txt
Build the C++ library for post-processing. See: https://github.com/ildoonet/tf-pose-estimation/tree/master/tf_pose/pafprocess
$ cd tf_pose/pafprocess
$ swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace
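If the build succeeded, the generated extension should import cleanly from the repository root (the module path below is an assumption based on the directory layout):

# Run from the repository root; an ImportError means the swig build failed.
from tf_pose.pafprocess import pafprocess
print('pafprocess extension loaded')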
Alternatively, you can install this repo as a shared package using pip.
$ git clone https://www.github.com/ildoonet/tf-pose-estimation
$ cd tf-pose-estimation
$ python setup.py install # Or, `pip install -e .`
See: experiments.md
Before running the demo, you should download the graph files. You can deploy these graphs on mobile or other platforms.
- cmu (trained in 656x368)
- mobilenet_thin (trained in 432x368)
- mobilenet_v2_large (trained in 432x368)
- mobilenet_v2_small (trained in 432x368)
CMU's model graphs are too large for git, so I uploaded them to an external cloud. You should download them if you want to use CMU's original model. Download scripts are provided in the model folder.
$ cd models/graph/cmu
$ bash download.sh
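Once downloaded, the frozen graph can be loaded directly with the TensorFlow 1.x API if you want to deploy it yourself (the graph_opt.pb filename is an assumption based on what download.sh fetches):

# Load the frozen graph definition; run from the repository root.
import tensorflow as tf

graph_def = tf.GraphDef()
with open('models/graph/cmu/graph_opt.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='TfPoseEstimator')
print(len(graph_def.node), 'nodes in graph')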
You can test the inference feature with a single image.
$ python run.py --model=mobilenet_thin --resize=432x368 --image=./images/p1.jpg
The --image flag MUST be given a path relative to the src folder, with no "~", e.g.:
--image ../../Desktop
Then you will see a screen like the one below, showing the pafmap, heatmap, result, etc.
$ python run_webcam.py --model=mobilenet_thin --resize=432x368 --camera=0
Then you will see the realtime webcam screen with estimated poses, as below. This realtime result was recorded on a 13" MacBook Pro with a 3.1GHz dual-core CPU.
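For reference, run_webcam.py essentially boils down to a loop like the sketch below (the model name and resolution are placeholders; the API calls match the Python classes described next):

import cv2
from tf_pose.estimator import TfPoseEstimator
from tf_pose.networks import get_graph_path

e = TfPoseEstimator(get_graph_path('mobilenet_thin'), target_size=(432, 368))
cam = cv2.VideoCapture(0)  # camera index, as in --camera=0
while True:
    ret, frame = cam.read()
    if not ret:
        break
    humans = e.inference(frame)
    frame = TfPoseEstimator.draw_humans(frame, humans, imgcopy=False)
    cv2.imshow('tf-pose-estimation result', frame)
    if cv2.waitKey(1) == 27:  # press ESC to quit
        break
cam.release()
cv2.destroyAllWindows()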
This pose estimator provides simple Python classes that you can use in your applications.
See run.py or run_webcam.py as references.
from tf_pose.estimator import TfPoseEstimator
from tf_pose.networks import get_graph_path

e = TfPoseEstimator(get_graph_path(args.model), target_size=(w, h))  # load the frozen graph
humans = e.inference(image)  # run pose estimation on a numpy image
image = TfPoseEstimator.draw_humans(image, humans, imgcopy=False)  # draw skeletons onto the image
If you installed it as a package, you can use it directly:
import tf_pose
coco_style = tf_pose.infer(image_path)
See: etcs/ros.md
See: etcs/training.md
See: etcs/reference.md