PISE

The code for our CVPR 2021 paper PISE: Person Image Synthesis and Editing with Decoupled GAN.

Requirements

conda create -n pise python=3.6
conda install pytorch=1.2 cudatoolkit=10.0 torchvision
pip install scikit-image pillow pandas tqdm dominate natsort 
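
Optionally, a quick sanity check (a minimal suggestion, not part of the original setup) to confirm that the pinned PyTorch build can see your GPU:

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"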

Data

Data preparation for images and keypoints can follow Pose Transfer and GFLA.

  1. Download the DeepFashion dataset. You will need to request a password from the dataset maintainers.

  2. Download the train/test keypoint annotations and the dataset lists from Google Drive, including fashion-pairs-train.csv, fashion-pairs-test.csv, fashion-annotation-train.csv, fashion-annotation-test.csv, train.lst, and test.lst. Put these files under the ./fashion_data directory.

  3. Run the following code to split the train/test dataset.

    python data/generate_fashion_datasets.py
    
  4. Download the parsing data and put the files under the ./fashion_data directory. Parsing data for testing can be found on Baidu (fetch code: abcd) or Google Drive. Parsing data for training can be found on Baidu (fetch code: abcd) or Google Drive. You can also generate the parsing data yourself with PGN and re-organize the labels as you need (a minimal remapping sketch follows this list).
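
For reference, here is a minimal sketch of such a label re-organization. The label IDs and file paths below are hypothetical placeholders, not the mapping used in the paper; adapt them to the parsing label set you choose.

    # remap_parsing.py -- minimal sketch; LABEL_MAP and paths are hypothetical placeholders
    import numpy as np
    from PIL import Image

    # Map source (e.g. PGN-style) label IDs to the target IDs your setup expects.
    LABEL_MAP = {0: 0, 1: 1, 2: 1, 5: 2, 6: 2, 7: 2}

    def remap_parsing(src_path, dst_path):
        parsing = np.array(Image.open(src_path))   # single-channel label map
        remapped = np.zeros_like(parsing)
        for src_id, dst_id in LABEL_MAP.items():
            remapped[parsing == src_id] = dst_id
        Image.fromarray(remapped).save(dst_path)

    # Placeholder paths; point these at your own parsing files.
    remap_parsing("fashion_data/parsing_raw/example.png", "fashion_data/parsing/example.png")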

Train

python train.py --name=fashion --model=painet --gpu_ids=0

Note that if you want to train a model for texture transfer and region editing in addition to pose transfer, comment out lines 177 and 178 and uncomment lines 162-176.

For multi-GPU training, you can refer to the corresponding issue in GFLA.

Test

You can directly download our test results from Baidu (fetch code: abcd) or Google Drive.
The pre-trained checkpoint for human pose transfer reported in our paper can be found on Baidu (fetch code: abcd) or Google Drive; put it in the ./results/fashion folder.

The pre-trained checkpoint for texture transfer, region editing, and style interpolation used in our paper can be found on Baidu (fetch code: abcd) or Google Drive. Note that the model needs to be changed accordingly (see the note under Train).

Test by yourself

python test.py --name=fashion --model=painet --gpu_ids=0 

Citation

If you use this code, please cite our paper.

@inproceedings{PISE,
  title={{PISE}: Person Image Synthesis and Editing with Decoupled GAN},
  author={Zhang, Jinsong and Li, Kun and Lai, Yu-Kun and Yang, Jingyu},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

Acknowledgments

Our code is based on GFLA.
