Virtual-Try-On

This project wraps a complete virtual try-on pipeline based on the official implementations of cp-vton, OpenPose and JPPNet.

Installation

  • Important: Python 3.6.5, TensorFlow 1.13.1, PyTorch 1.3.0, torchvision 0.2.1
  • Download OpenPose to the OpenPose directory and compile it according to its instructions.
  • Download pre-trained model for JPPNet and put it under LIP_JPPNet/checkpoint/JPPNet-s2/. There should be 4 files in this directory: checkpoint, model.ckpt-xxx.data-xxx, model.ckpt-xxx.index, model.ckpt-xxx.meta.
  • Download pre-trained models for cp-vton and put them under cp_vton/checkpoints/. There should be two folders, gmm_train_new and tom_train_new, in this directory. The original authors have not released their models, but you may download models from a re-implementation. A quick sanity check of the expected layout is sketched after this list.
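
The following is a minimal sketch for verifying that the downloaded checkpoints sit where the steps above expect them. The directory names come from this README; the exact model.ckpt-xxx step numbers depend on which checkpoint you download, so the script only matches the file name pattern.

```python
# check_checkpoints.py -- sanity check for the checkpoint layout described above.
import glob
import os

def check_jppnet(root="LIP_JPPNet/checkpoint/JPPNet-s2"):
    """Expect a `checkpoint` file plus one model.ckpt-xxx.{data,index,meta} set."""
    ok = os.path.isfile(os.path.join(root, "checkpoint"))
    for ext in ("data-*", "index", "meta"):
        ok &= bool(glob.glob(os.path.join(root, "model.ckpt-*." + ext)))
    return ok

def check_cpvton(root="cp_vton/checkpoints"):
    """Expect the GMM and TOM checkpoints in separate folders."""
    return all(os.path.isdir(os.path.join(root, d))
               for d in ("gmm_train_new", "tom_train_new"))

if __name__ == "__main__":
    print("JPPNet checkpoint:", "found" if check_jppnet() else "missing")
    print("cp-vton checkpoints:", "found" if check_cpvton() else "missing")
```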

Usage

  • Put the person image and the cloth image under data/raw_data/image and data/raw_data/cloth respectively.
  • Use OpenPose to extract the pose information; refer to the OpenPose instructions for building and running it. Key parameters: --image_dir YOUR_IMAGE_PATH --model_pose COCO --write_json RESULT_PATH. Put the resulting .json files in data/raw_data/pose/ (see the sketch after this list for the expected format).
  • Run python try_on.py to get the result.
  • The results will be written to result/.
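
Below is a minimal sketch for inspecting an OpenPose --write_json result before running try_on.py. It assumes the usual OpenPose output name <image_name>_keypoints.json (the file name in the example is hypothetical) and that the COCO model produces 18 keypoints as (x, y, confidence) triples; the keypoint key is "pose_keypoints" in older OpenPose versions and "pose_keypoints_2d" in newer ones, so both are handled.

```python
# inspect_pose.py -- quick look at an OpenPose keypoints JSON file.
import json

def load_keypoints(path):
    with open(path) as f:
        data = json.load(f)
    person = data["people"][0]
    # Older OpenPose versions write "pose_keypoints"; newer ones "pose_keypoints_2d".
    flat = person.get("pose_keypoints", person.get("pose_keypoints_2d"))
    # COCO model: 18 keypoints, each stored as a flat (x, y, confidence) triple.
    return [flat[i:i + 3] for i in range(0, len(flat), 3)]

if __name__ == "__main__":
    # Hypothetical file name; use the actual output of your OpenPose run.
    kps = load_keypoints("data/raw_data/pose/000001_0_keypoints.json")
    print(f"{len(kps)} keypoints; first (x, y, conf): {kps[0]}")
```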
