Code for the paper LAFITE: Towards Language-Free Training for Text-to-Image Generation (CVPR 2022).
More details will be added later.
The implementation is based on stylegan2-ada-pytorch and CLIP; the required packages can be found in those repositories.
Example of preparing a dataset:
python dataset_tool.py --source=./path_to_some_dataset/ --dest=./datasets/some_dataset.zip --width=256 --height=256 --transform=center-crop
The files in ./path_to_some_dataset/ should be organized as follows, with each image paired with a same-named .txt file containing its caption(s); a small sketch for producing this layout is shown after the tree:
./path_to_some_dataset/
├ 1.png
├ 1.txt
├ 2.png
├ 2.txt
├ ...
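As a rough illustration (not part of the repository), the following Python sketch writes such paired files; the `pairs` iterable is a hypothetical placeholder for however you load your own (image, caption) data:

```python
# Hypothetical helper that writes the layout above: one PNG per image and a
# same-named TXT holding its caption. `pairs` is assumed to yield (PIL image, caption string).
import os
from typing import Iterable, Tuple
from PIL import Image

def write_pairs(pairs: Iterable[Tuple[Image.Image, str]], out_dir: str = "./path_to_some_dataset") -> None:
    os.makedirs(out_dir, exist_ok=True)
    for idx, (image, caption) in enumerate(pairs, start=1):
        image.save(os.path.join(out_dir, f"{idx}.png"))            # 1.png, 2.png, ...
        with open(os.path.join(out_dir, f"{idx}.txt"), "w") as f:  # matching 1.txt, 2.txt, ...
            f.write(caption)
```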
We provide links to several commonly used datasets that we have already processed with CLIP ViT-B/32 (a feature-extraction sketch follows the list):
Multi-modal CelebA-HQ Training Set
Multi-modal CelebA-HQ Testing Set
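Here, "processed with CLIP ViT-B/32" refers to pre-computing CLIP features for the image-text pairs. A minimal sketch of extracting such features with the official CLIP package (illustrative only; the repository's preprocessing may differ in details):

```python
# Minimal sketch: extract and L2-normalize CLIP ViT-B/32 features for one image-text pair.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("1.png")).unsqueeze(0).to(device)
text = clip.tokenize(open("1.txt").read().splitlines()[0]).to(device)  # first caption

with torch.no_grad():
    img_feat = model.encode_image(image)  # [1, 512]
    txt_feat = model.encode_text(text)    # [1, 512]
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
```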
Example training commands:
Training with ground-truth image-text pairs:
python train.py --gpus=4 --outdir=./outputs/ --temp=0.5 --itd=10 --itc=10 --gamma=10 --data=./datasets/COCO2014_train_CLIP_ViTB32.zip --test_data=./datasets/COCO2014_val_CLIP_ViTB32.zip --mixing_prob=0.0
Training with the language-free method (pseudo image-text feature pairs; a sketch of the idea follows the command):
python train.py --gpus=4 --outdir=./outputs/ --temp=0.5 --itd=10 --itc=10 --gamma=10 --data=./datasets/COCO2014_train_CLIP_ViTB32.zip --test_data=./datasets/COCO2014_val_CLIP_ViTB32.zip --mixing_prob=1.0
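The two commands differ only in --mixing_prob: with 0.0 the model is trained on ground-truth CLIP text features, while 1.0 trains fully language-free, using pseudo text features constructed from CLIP image features. As a hedged sketch of that idea (a Lafite_G-style Gaussian perturbation; see the paper and train.py for the exact scheme and hyperparameters):

```python
# Sketch of building a pseudo text feature from a normalized CLIP image feature by
# adding scaled Gaussian noise and re-normalizing. `xi` is an assumed perturbation
# level, not a value taken from this repository.
import torch

def pseudo_text_feature(img_feat: torch.Tensor, xi: float = 0.1) -> torch.Tensor:
    # img_feat: [batch, 512] L2-normalized CLIP image features
    eps = torch.randn_like(img_feat)
    noise = xi * img_feat.norm(dim=-1, keepdim=True) * eps / eps.norm(dim=-1, keepdim=True)
    pseudo = img_feat + noise
    return pseudo / pseudo.norm(dim=-1, keepdim=True)
```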
We also provide several pre-trained models (hosted on Google Drive); a rough generation sketch follows the list.
Model trained on MS-COCO, Language-free (Lafite-G), CLIP ViT-B/32
Model trained on MS-COCO, Language-free (Lafite-NN), CLIP ViT-B/32
Model trained on MS-COCO with Ground-truth Image-text Pairs, CLIP ViT-B/32
Model trained on MS-COCO with Ground-truth Image-text Pairs, CLIP ViT-B/16
Model pre-trained on Google CC3M
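As a rough usage sketch (not the repository's generation script), a pre-trained pickle can be loaded with the stylegan2-ada-pytorch loader and conditioned on a CLIP text feature. The file path and the generator call signature below are assumptions; check the code in this repository for the exact interface:

```python
# Rough text-to-image sketch with a pre-trained Lafite pickle. Assumptions (not confirmed
# by this README): the pickle loads via stylegan2-ada-pytorch's `legacy` loader, and the
# generator takes the normalized CLIP text feature as its conditioning input.
import torch
import clip
import dnnlib   # from stylegan2-ada-pytorch
import legacy   # from stylegan2-ada-pytorch

device = torch.device("cuda")
clip_model, _ = clip.load("ViT-B/32", device=device)

with dnnlib.util.open_url("./pretrained/lafite_coco.pkl") as f:  # hypothetical local path
    G = legacy.load_network_pkl(f)["G_ema"].to(device)

tokens = clip.tokenize(["a red bus driving down a city street"]).to(device)
with torch.no_grad():
    txt_feat = clip_model.encode_text(tokens).float()
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    z = torch.randn(1, G.z_dim, device=device)
    img = G(z, txt_feat)  # assumed call: text feature passed as the conditioning input
```

Outputs follow the StyleGAN2 convention of values in roughly [-1, 1], so convert with something like (img * 127.5 + 128).clamp(0, 255) before saving.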
If you find this work useful, please cite:
@article{zhou2021lafite,
title={LAFITE: Towards Language-Free Training for Text-to-Image Generation},
author={Zhou, Yufan and Zhang, Ruiyi and Chen, Changyou and Li, Chunyuan and Tensmeyer, Chris and Yu, Tong and Gu, Jiuxiang and Xu, Jinhui and Sun, Tong},
journal={arXiv preprint arXiv:2111.13792},
year={2021}
}
Please contact [email protected] if you have any questions.