The code for our CVPR paper PISE: Person Image Synthesis and Editing with Decoupled GAN
conda create -n pise python=3.6
conda install pytorch=1.2 cudatoolkit=10.0 torchvision
pip install scikit-image pillow pandas tqdm dominate natsort
Data preparation for images and keypoints follows Pose Transfer.
Parsing data for testing can be found on baidu (fetch code: abcd) or Google Drive. Parsing data for training can be found on baidu (fetch code: abcd) or Google Drive. You can also generate the parsing data yourself with PGN and re-organize the labels as you need.
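How to re-organize the labels depends on which regions you want to edit. A minimal sketch, assuming PGN-style single-channel label maps saved as grayscale PNGs; the grouping in `LABEL_MAP` below is purely illustrative, not the mapping used in the paper:

```python
import numpy as np
from PIL import Image

# Hypothetical grouping of PGN's fine-grained labels into coarser regions.
# Pick the source ids and target ids that match your own editing needs.
LABEL_MAP = {
    0: 0,   # background -> background
    5: 1,   # upper clothes -> torso
    7: 1,   # coat -> torso
    9: 2,   # pants -> legs
    12: 2,  # skirt -> legs
    13: 3,  # face -> head
}

def remap_parsing(parsing: np.ndarray, mapping: dict, default: int = 0) -> np.ndarray:
    """Map each source label id to a new region id via a 256-entry lookup table."""
    lut = np.full(256, default, dtype=np.uint8)
    for src, dst in mapping.items():
        lut[src] = dst
    return lut[parsing]

def convert_file(src_path: str, dst_path: str) -> None:
    """Load a label map, remap its ids, and save the result."""
    parsing = np.array(Image.open(src_path))  # single-channel label image
    Image.fromarray(remap_parsing(parsing, LABEL_MAP)).save(dst_path)
```

Unlisted ids fall back to `default` (background here), so the output always stays within your chosen label set.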
python train.py --name=fashion --model=painet --gpu_ids=0
Note that if you want to train a pose transfer model together with texture transfer and region editing, comment out lines 177 and 178 and uncomment lines 162-176.
You can directly download our test results from baidu (fetch code: abcd) or Google Drive.
The pre-trained checkpoint of human pose transfer reported in our paper can be found on baidu (fetch code: abcd) or Google Drive; put it in the folder ./results/fashion.
The pre-trained checkpoint of texture transfer, region editing, and style interpolation used in our paper can be found on baidu (fetch code: abcd) or Google Drive. Note that the model needs to be changed accordingly.
Test by yourself
python test.py --name=fashion --model=painet --gpu_ids=0
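To sanity-check generated images against their ground truth, a simple per-channel SSIM comparison with scikit-image (already installed above) can be used. This is only an illustrative helper, not the evaluation script used in the paper:

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(generated: np.ndarray, target: np.ndarray) -> float:
    """Mean SSIM over the channels of an RGB uint8 image pair of equal shape."""
    scores = [
        structural_similarity(generated[..., c], target[..., c], data_range=255)
        for c in range(generated.shape[-1])
    ]
    return float(np.mean(scores))
```

Load a generated/ground-truth pair (e.g. with `PIL.Image.open` converted to arrays) and compare; identical images score 1.0, and lower values indicate larger structural differences.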
If you use this code, please cite our paper.
@inproceedings{PISE,
title={{PISE}: Person Image Synthesis and Editing with Decoupled GAN},
author={Zhang, Jinsong and Li, Kun and Lai, Yu-Kun and Yang, Jingyu},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021}
}
Our code is based on GFLA.