This is the PyTorch implementation of color-to-thermal image translation (LG-Diff). Special thanks to the open-source libraries DiffIR and Diffusers for helping build LG-Diff. The code is based on the PyTorch implementations of Diffusers (https://github.com/huggingface/diffusers) and DiffIR (https://github.com/Zj-BinXia/DiffIR), from which we benefited greatly.
Linux or Windows 10
Python ≥ 3.9
NVIDIA GPU with CUDA ≥ 11.3 and cuDNN
GPU memory ≥ 40 GB
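A quick way to confirm the requirements above is a small check function. This is a minimal sketch, not part of the repo: the function name and thresholds are illustrative, and the inputs would in practice come from `sys.version_info`, `torch.cuda.is_available()`, and `torch.cuda.get_device_properties(0).total_memory`.

```python
def check_environment(python_version, cuda_available, gpu_mem_gb,
                      min_python=(3, 9), min_gpu_mem_gb=40):
    """Return a list of warnings for unmet requirements.

    python_version : tuple, e.g. sys.version_info[:2]
    cuda_available : bool, e.g. torch.cuda.is_available()
    gpu_mem_gb     : float, total GPU memory in GB
    """
    warnings = []
    if python_version < min_python:
        warnings.append("Python >= 3.9 required")
    if not cuda_available:
        warnings.append("CUDA-capable NVIDIA GPU required")
    elif gpu_mem_gb < min_gpu_mem_gb:
        warnings.append(f"GPU memory {gpu_mem_gb:.0f} GB < {min_gpu_mem_gb} GB")
    return warnings
```

An empty return value means the machine meets the stated requirements.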
The Demo- files are used to verify the effectiveness of the local class-region guidance strategy on Diffusers; they can be trained directly.
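The core idea of local class-region guidance can be sketched as a region-weighted training loss: pixels inside a class mask (e.g. salient thermal targets) contribute more to the objective than background pixels. The sketch below is illustrative only; the function name, the weighting scheme, and the flat-list inputs are assumptions, not the repo's actual API.

```python
def region_weighted_loss(pred, target, mask, region_weight=2.0):
    """Weighted mean squared error over flattened pixel lists.

    Pixels where mask is truthy (a local class region) are counted
    region_weight times; background pixels are counted once.
    """
    total, weight_sum = 0.0, 0.0
    for p, t, m in zip(pred, target, mask):
        w = region_weight if m else 1.0
        total += w * (p - t) ** 2
        weight_sum += w
    return total / weight_sum
```

In a real diffusion pipeline this weighting would be applied to the per-pixel denoising loss, steering the model's capacity toward the masked class regions.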
For image translation tasks, unaligned video streams can only be handled in an unsupervised or non-regression manner. By contrast, regression models generally require paired cross-modal video streams to perform well.
- Visual comparisons on unseen challenging instances from InfraredCoast.
- Intuitive cases illustrating the importance of local class-region guidance.
AttentionGAN: Training and testing follow https://github.com/Ha0Tang/AttentionGAN.
Pix2Pix: Training and testing follow https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.
CycleGAN: Training and testing follow https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.
BicycleGAN: Training and testing follow https://github.com/junyanz/BicycleGAN.
GCGAN: Training and testing follow https://github.com/hufu6371/GcGAN.
DCLGAN: Training and testing follow https://github.com/JunlinHan/DCLGAN.
CUT: Training and testing follow https://github.com/taesungp/contrastive-unpaired-translation.
UNIT: Training and testing follow https://github.com/mingyuliutw/UNIT.
MUNIT: Training and testing follow https://github.com/NVlabs/MUNIT.
DRIT: Training and testing follow https://github.com/HsinYingLee/DRIT.
MSGAN: Training and testing follow https://github.com/HelenMao/MSGAN.
Conditional-GAN: Training and testing follow https://github.com/huggingface/diffusers.