CausalLearning/SE-GAN

Self-Ensembling GAN for Cross-Domain Semantic Segmentation

This is the official PyTorch implementation of the domain adaptation method described in our paper *Self-Ensembling GAN for Cross-Domain Semantic Segmentation*.

Preparation

The data folder is structured as follows:

├── SegmentationData/
│   ├── Cityscapes/
│   │   ├── gtFine/
│   │   └── leftImg8bit/
│   ├── GTA5/
│   │   ├── images/
│   │   └── labels/
│   └── Synthia/
│       ├── images/
│       └── labels/
└── PreTrainedModel/
    ├── VGG_pretrained_GTA5.pth
    ├── VGG_pretrained_Synthia.pth
    ├── DeepLab_resnet_pretrained_init-f81d91e8.pth
    └── ...
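Before launching training, it can help to confirm the data tree above is in place. A minimal hypothetical helper (not part of this repo), assuming the default `SegmentationData/` root:

```python
import os

# Expected sub-directories from the data tree above.
EXPECTED_SUBDIRS = [
    "Cityscapes/gtFine",
    "Cityscapes/leftImg8bit",
    "GTA5/images",
    "GTA5/labels",
    "Synthia/images",
    "Synthia/labels",
]

def missing_dirs(root="SegmentationData"):
    """Return the expected sub-directories that are absent under `root`."""
    return [d for d in EXPECTED_SUBDIRS
            if not os.path.isdir(os.path.join(root, d))]
```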

Training the Task-Guided Style Transfer Network (TGSTN)

$ CUDA_VISIBLE_DEVICES=0 python TGSTN_GTA.py --restore_from /Path/To/VGG_pretrained_GTA5.pth

Style Transfer

$ CUDA_VISIBLE_DEVICES=0 python StyleTrans_GTA.py --restore_from /Path/To/GTATrans.pth

(Figure: image style transfer results from GTA5 to Cityscapes with different approaches.)

Training SE-GAN

$ CUDA_VISIBLE_DEVICES=0 python SEGAN_GTA.py --restore_from /Path/To/DeepLab_resnet_pretrained_init-f81d91e8.pth

Alternatively, you can use the ResNet-101 model pretrained on the GTA5 dataset to boost the training of SE-GAN. This model is obtained by simply training on the original GTA5 samples, and it yields better performance than the ImageNet-pretrained one.
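Self-ensembling methods typically maintain a teacher segmentation network whose weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update with weights as plain NumPy arrays (the repo itself operates on PyTorch parameters; the smoothing factor `alpha` here is an assumption, not the repo's value):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.999):
    """Blend each student weight into the teacher: t <- alpha*t + (1-alpha)*s."""
    return {name: alpha * teacher[name] + (1 - alpha) * w
            for name, w in student.items()}
```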

Pseudo Label Generation

$ CUDA_VISIBLE_DEVICES=0 python GenPseudoLabel_GTA.py --restore_from /Path/To/GTA2Cityscapes.pth
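The core idea behind pseudo label generation can be sketched as per-pixel confidence thresholding: keep the predicted class only where the model is confident, and mark the rest as ignored. This is a simplified stand-in for the script's actual procedure; the threshold of 0.9 and the ignore index 255 are assumptions:

```python
import numpy as np

def generate_pseudo_labels(probs, threshold=0.9, ignore_index=255):
    """probs: (C, H, W) per-class probabilities for one target image.

    Returns an (H, W) label map where low-confidence pixels are ignored.
    """
    labels = probs.argmax(axis=0)        # most likely class per pixel
    confidence = probs.max(axis=0)       # its probability
    labels[confidence < threshold] = ignore_index
    return labels
```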

Self-Training with Pseudo Labels

$ CUDA_VISIBLE_DEVICES=0 python SelfTrain_GTA.py --restore_from /Path/To/GTA2Cityscapes.pth

Multi-Scale Evaluation

$ CUDA_VISIBLE_DEVICES=0 python MultiScaleTest_GTA.py --restore_from /Path/To/Self-Trained-Model.pth
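Multi-scale evaluation runs inference at several input scales and fuses the results. A minimal sketch of the fusion step, assuming a hypothetical `predict_fn` that returns (C, H, W) class probabilities for a given scale, already resized back to the original resolution (both the function and the scale set are stand-ins, not the repo's API):

```python
import numpy as np

def multiscale_predict(image, predict_fn, scales=(0.75, 1.0, 1.25)):
    """Average per-scale probability maps, then take per-pixel argmax."""
    probs = [predict_fn(image, scale) for scale in scales]
    fused = np.mean(np.stack(probs, axis=0), axis=0)  # average over scales
    return fused.argmax(axis=0)                       # final label map
```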

(Figure: segmentation results on real street-view video sequences, adapted from GTA5 to Cityscapes.)

Paper

Self-Ensembling GAN for Cross-Domain Semantic Segmentation

Please cite our paper if you find it useful for your research.

@article{SEAN,
  title={Self-ensembling GAN for cross-domain semantic segmentation},
  author={Xu, Yonghao and He, Fengxiang and Du, Bo and Tao, Dacheng and Zhang, Liangpei},
  journal={IEEE Transactions on Multimedia},
  volume={},
  pages={},
  year={2022}
}

Acknowledgement

AdaptSegNet

Fast-Neural-Style

License

This repo is distributed under the MIT License. The code can be used for academic purposes only.
