This is a PyTorch implementation of the paper CrossNet: Latent Cross-Consistency for Unpaired Image Translation.
The project is built on pytorch-CycleGAN-and-pix2pix. Many thanks to its authors; their work helped us a lot.
For easier understanding, we modified the project to use a simpler and more direct structure than the original pytorch-CycleGAN-and-pix2pix project, shown as follows.
├─checkpoints
│ └─CrossNet_horse2zebra
├─datasets
├─results
│ └─CrossNet_horse2zebra
├─config.py
├─data.py
├─loss.py
├─networks.py
├─test.py
├─train.py
├─README.md
└─__pycache__
Inspired by the original project's experiment-management mechanism, we provide an efficient way to store and display models, training data, and results for different experiments.
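The per-experiment layout above can be sketched as follows. This is an illustrative helper, not code from this repo; the function name `experiment_dirs` is an assumption, and only the `checkpoints/<name>` and `results/<name>` layout comes from the tree shown earlier.

```python
import os

def experiment_dirs(name, root="."):
    """Return (checkpoint_dir, result_dir) for an experiment such as
    "CrossNet_horse2zebra", creating the directories if needed.

    Hypothetical sketch of the layout shown in the directory tree:
        checkpoints/<name>  - saved model weights
        results/<name>      - generated test images
    """
    ckpt = os.path.join(root, "checkpoints", name)
    res = os.path.join(root, "results", name)
    os.makedirs(ckpt, exist_ok=True)
    os.makedirs(res, exist_ok=True)
    return ckpt, res
```

Keeping checkpoints and results keyed by a single experiment name makes it easy to compare runs side by side.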
git clone https://github.com/NeverGiveU/CrossNet-pytorch.git
cd CrossNet-pytorch
Download the horse2zebra dataset from Baidu NetDisk.
Unzip the .zip file into datasets.
Use the command python train.py
to start training.
Use the command python test.py
to run testing after training finishes.
Some results can be seen in the sample directory. Note that the results from zebra to horse are better than those in the inverse direction.
We found that during training, the generator's adversarial loss remained almost unchanged, while the discriminator's loss decreased noticeably. We think this is because we train in a "G first and D next" order, in which D provides less up-to-date information when G is updated. We will try the alternative "D first and G next" order in the coming days, and the project will be updated soon.
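The two update orders described above can be sketched as follows. This is a minimal, framework-free illustration, not the actual training loop of this repo; the callables `g_step` and `d_step` stand in for the generator and discriminator updates.

```python
def train_epoch(batches, g_step, d_step, d_first=False):
    """Run one epoch with a chosen alternating-update order.

    d_first=False reproduces the "G first and D next" mode described
    above, where G is updated against a stale discriminator.
    d_first=True is the proposed alternative, where G is updated
    against a freshly updated discriminator.

    Note: g_step/d_step are hypothetical stand-ins for the real
    generator/discriminator optimization steps.
    """
    order = []
    for batch in batches:
        if d_first:
            d_step(batch)  # update D on the current G's outputs first
            g_step(batch)  # G then sees an up-to-date critic
        else:
            g_step(batch)  # G updated against the old D
            d_step(batch)  # D updated afterwards
        order.append("D,G" if d_first else "G,D")
    return order
```

The intuition is that with `d_first=True`, the gradient signal G receives reflects the discriminator's most recent state, which may give G more useful feedback per step.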