This project wraps a complete virtual try-on pipeline based on the official implementations of cp-vton, OpenPose and JPPNet.
- Important: Python 3.6.5, TensorFlow 1.13.1, PyTorch 1.3.0, torchvision 0.2.1
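  A minimal sketch to confirm your environment matches these pinned versions (assumes all four packages are already installed):

  ```python
  # Print the interpreter and library versions; compare against the list above.
  import sys
  import tensorflow as tf
  import torch
  import torchvision

  print("Python     :", sys.version.split()[0])   # expect 3.6.5
  print("TensorFlow :", tf.__version__)           # expect 1.13.1
  print("PyTorch    :", torch.__version__)        # expect 1.3.0
  print("torchvision:", torchvision.__version__)  # expect 0.2.1
  ```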
- Download OpenPose to the `OpenPose` directory and compile it according to its instructions.
- Download the pre-trained model for JPPNet and put it under `LIP_JPPNet/checkpoint/JPPNet-s2/`. There should be 4 files in this directory: `checkpoint`, `model.ckpt-xxx.data-xxx`, `model.ckpt-xxx.index` and `model.ckpt-xxx.meta`.
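  The `checkpoint` file is what TensorFlow 1.x uses to locate the latest `model.ckpt-xxx` snapshot. A minimal sketch to verify it resolves correctly:

  ```python
  # Read the checkpoint index and print the snapshot it points to.
  import tensorflow as tf

  state = tf.train.get_checkpoint_state("LIP_JPPNet/checkpoint/JPPNet-s2")
  print("latest JPPNet checkpoint:",
        state.model_checkpoint_path if state else "not found")
  ```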
- Download the pre-trained models for cp-vton and put them under `cp_vton/checkpoints/`. There should be two folders, `gmm_train_new` and `tom_train_new`, in this directory. The authors have not released their original models, but you may download models from a re-implementation.
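  A minimal sketch to confirm both checkpoint folders are in place (folder names as listed above):

  ```python
  # Fail fast if either cp-vton checkpoint folder is missing.
  import os

  for folder in ("gmm_train_new", "tom_train_new"):
      path = os.path.join("cp_vton", "checkpoints", folder)
      assert os.path.isdir(path), "missing checkpoint folder: %s" % path
  print("cp-vton checkpoints look complete.")
  ```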
- Put the image and the cloth under `data/raw_data/image` and `data/raw_data/cloth` respectively.
- Use OpenPose to get the pose information; please refer to the instructions from OpenPose. Some key parameters: `--image_dir YOUR_IMAGE_PATH --model_pose COCO --write_json RESULT_PATH`. Put the `.json` results in `data/raw_data/pose/`.
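  A minimal sketch of a scripted OpenPose run with those parameters. The binary location and the two rendering flags are assumptions based on a default Linux build; adjust them to your setup:

  ```python
  # Invoke the compiled OpenPose binary and write COCO-format keypoints as JSON.
  import os
  import subprocess

  root = os.path.abspath(".")  # project root, so data paths stay valid
  subprocess.run(
      [
          "./build/examples/openpose/openpose.bin",  # assumed build location
          "--image_dir", os.path.join(root, "data/raw_data/image"),
          "--model_pose", "COCO",
          "--write_json", os.path.join(root, "data/raw_data/pose"),
          "--display", "0",       # assumed flags for a headless run
          "--render_pose", "0",
      ],
      check=True,
      cwd="OpenPose",  # OpenPose typically runs from its root to find its models/ folder
  )
  ```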
- Run `python try_on.py` to get the result.
- The result will be put in `result/`.
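  A minimal sketch for a quick look at the outputs (the image format in `result/` is an assumption; adjust the extensions if needed):

  ```python
  # List the generated try-on images.
  import glob

  for path in sorted(glob.glob("result/*.jpg") + glob.glob("result/*.png")):
      print(path)
  ```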